Feb 12 20:23:45.005130 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 12 20:23:45.005167 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 20:23:45.005213 kernel: efi: EFI v2.70 by EDK II
Feb 12 20:23:45.005229 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 12 20:23:45.005243 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:23:45.005256 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 12 20:23:45.005272 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 12 20:23:45.005287 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 12 20:23:45.005300 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 12 20:23:45.005314 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 12 20:23:45.005333 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 12 20:23:45.005346 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 12 20:23:45.005360 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 12 20:23:45.005374 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 12 20:23:45.005390 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 12 20:23:45.005409 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 12 20:23:45.005423 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 12 20:23:45.005438 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 12 20:23:45.005452 kernel: printk: bootconsole [uart0] enabled
Feb 12 20:23:45.005466 kernel: NUMA: Failed to initialise from firmware
Feb 12 20:23:45.005481 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 12 20:23:45.005495 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 12 20:23:45.005527 kernel: Zone ranges:
Feb 12 20:23:45.005545 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 12 20:23:45.005560 kernel: DMA32 empty
Feb 12 20:23:45.005574 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 12 20:23:45.005593 kernel: Movable zone start for each node
Feb 12 20:23:45.005607 kernel: Early memory node ranges
Feb 12 20:23:45.005622 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 12 20:23:45.005636 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 12 20:23:45.005650 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 12 20:23:45.005664 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 12 20:23:45.005678 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 12 20:23:45.005693 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 12 20:23:45.005707 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 12 20:23:45.005722 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 12 20:23:45.005736 kernel: psci: probing for conduit method from ACPI.
Feb 12 20:23:45.005770 kernel: psci: PSCIv1.0 detected in firmware.
Feb 12 20:23:45.005792 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 20:23:45.005807 kernel: psci: Trusted OS migration not required
Feb 12 20:23:45.005828 kernel: psci: SMC Calling Convention v1.1
Feb 12 20:23:45.005844 kernel: ACPI: SRAT not present
Feb 12 20:23:45.005859 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 20:23:45.005879 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 20:23:45.005894 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 20:23:45.005909 kernel: Detected PIPT I-cache on CPU0
Feb 12 20:23:45.005925 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 20:23:45.005940 kernel: CPU features: detected: Spectre-v2
Feb 12 20:23:45.005954 kernel: CPU features: detected: Spectre-v3a
Feb 12 20:23:45.005969 kernel: CPU features: detected: Spectre-BHB
Feb 12 20:23:45.005984 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 20:23:45.006000 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 20:23:45.006015 kernel: CPU features: detected: ARM erratum 1742098
Feb 12 20:23:45.006029 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 12 20:23:45.006048 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 12 20:23:45.006063 kernel: Policy zone: Normal
Feb 12 20:23:45.006081 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 20:23:45.006097 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:23:45.006112 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:23:45.006128 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:23:45.006143 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:23:45.006158 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 12 20:23:45.006174 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 12 20:23:45.006189 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:23:45.006208 kernel: trace event string verifier disabled
Feb 12 20:23:45.006224 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 20:23:45.006239 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:23:45.006255 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:23:45.006270 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 20:23:45.006285 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:23:45.006301 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:23:45.006316 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:23:45.006331 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 20:23:45.006346 kernel: GICv3: 96 SPIs implemented
Feb 12 20:23:45.006361 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 20:23:45.006376 kernel: GICv3: Distributor has no Range Selector support
Feb 12 20:23:45.006395 kernel: Root IRQ handler: gic_handle_irq
Feb 12 20:23:45.006410 kernel: GICv3: 16 PPIs implemented
Feb 12 20:23:45.006425 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 12 20:23:45.006440 kernel: ACPI: SRAT not present
Feb 12 20:23:45.006454 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 12 20:23:45.006469 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 20:23:45.006485 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 20:23:45.006500 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 12 20:23:45.006515 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 12 20:23:45.006530 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 12 20:23:45.006545 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 12 20:23:45.006565 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 12 20:23:45.006580 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 12 20:23:45.006595 kernel: Console: colour dummy device 80x25
Feb 12 20:23:45.006611 kernel: printk: console [tty1] enabled
Feb 12 20:23:45.006627 kernel: ACPI: Core revision 20210730
Feb 12 20:23:45.006643 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 12 20:23:45.006659 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:23:45.006675 kernel: LSM: Security Framework initializing
Feb 12 20:23:45.006690 kernel: SELinux: Initializing.
Feb 12 20:23:45.006706 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:23:45.006726 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:23:45.006742 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:23:45.007217 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 12 20:23:45.007243 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 12 20:23:45.007260 kernel: Remapping and enabling EFI services.
Feb 12 20:23:45.007275 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:23:45.007291 kernel: Detected PIPT I-cache on CPU1
Feb 12 20:23:45.007307 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 12 20:23:45.007323 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 12 20:23:45.007345 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 12 20:23:45.007360 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:23:45.007376 kernel: SMP: Total of 2 processors activated.
Feb 12 20:23:45.007392 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 20:23:45.007407 kernel: CPU features: detected: 32-bit EL1 Support
Feb 12 20:23:45.007423 kernel: CPU features: detected: CRC32 instructions
Feb 12 20:23:45.007438 kernel: CPU: All CPU(s) started at EL1
Feb 12 20:23:45.007454 kernel: alternatives: patching kernel code
Feb 12 20:23:45.007469 kernel: devtmpfs: initialized
Feb 12 20:23:45.007489 kernel: KASLR disabled due to lack of seed
Feb 12 20:23:45.007505 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:23:45.007521 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:23:45.007547 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:23:45.007568 kernel: SMBIOS 3.0.0 present.
Feb 12 20:23:45.007584 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 12 20:23:45.007600 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:23:45.007616 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 20:23:45.007632 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 20:23:45.007649 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 20:23:45.007667 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:23:45.007688 kernel: audit: type=2000 audit(0.250:1): state=initialized audit_enabled=0 res=1
Feb 12 20:23:45.007709 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:23:45.007725 kernel: cpuidle: using governor menu
Feb 12 20:23:45.007741 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 20:23:45.007782 kernel: ASID allocator initialised with 32768 entries
Feb 12 20:23:45.007800 kernel: ACPI: bus type PCI registered
Feb 12 20:23:45.007822 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:23:45.007838 kernel: Serial: AMBA PL011 UART driver
Feb 12 20:23:45.007855 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:23:45.007871 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 20:23:45.007887 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:23:45.007903 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 20:23:45.007919 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:23:45.007935 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 20:23:45.007951 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:23:45.007972 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:23:45.007988 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:23:45.008004 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:23:45.008020 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:23:45.008036 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:23:45.008052 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:23:45.008068 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:23:45.008085 kernel: ACPI: Interpreter enabled
Feb 12 20:23:45.008101 kernel: ACPI: Using GIC for interrupt routing
Feb 12 20:23:45.008121 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 20:23:45.008138 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 12 20:23:45.008439 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:23:45.008646 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 20:23:45.008886 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 20:23:45.009090 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 12 20:23:45.009317 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 12 20:23:45.009348 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 12 20:23:45.009365 kernel: acpiphp: Slot [1] registered
Feb 12 20:23:45.009381 kernel: acpiphp: Slot [2] registered
Feb 12 20:23:45.009397 kernel: acpiphp: Slot [3] registered
Feb 12 20:23:45.009413 kernel: acpiphp: Slot [4] registered
Feb 12 20:23:45.009430 kernel: acpiphp: Slot [5] registered
Feb 12 20:23:45.009445 kernel: acpiphp: Slot [6] registered
Feb 12 20:23:45.009461 kernel: acpiphp: Slot [7] registered
Feb 12 20:23:45.009477 kernel: acpiphp: Slot [8] registered
Feb 12 20:23:45.009498 kernel: acpiphp: Slot [9] registered
Feb 12 20:23:45.009514 kernel: acpiphp: Slot [10] registered
Feb 12 20:23:45.009530 kernel: acpiphp: Slot [11] registered
Feb 12 20:23:45.009546 kernel: acpiphp: Slot [12] registered
Feb 12 20:23:45.009561 kernel: acpiphp: Slot [13] registered
Feb 12 20:23:45.009578 kernel: acpiphp: Slot [14] registered
Feb 12 20:23:45.009594 kernel: acpiphp: Slot [15] registered
Feb 12 20:23:45.009610 kernel: acpiphp: Slot [16] registered
Feb 12 20:23:45.009626 kernel: acpiphp: Slot [17] registered
Feb 12 20:23:45.009642 kernel: acpiphp: Slot [18] registered
Feb 12 20:23:45.009663 kernel: acpiphp: Slot [19] registered
Feb 12 20:23:45.009679 kernel: acpiphp: Slot [20] registered
Feb 12 20:23:45.009695 kernel: acpiphp: Slot [21] registered
Feb 12 20:23:45.009711 kernel: acpiphp: Slot [22] registered
Feb 12 20:23:45.009727 kernel: acpiphp: Slot [23] registered
Feb 12 20:23:45.009743 kernel: acpiphp: Slot [24] registered
Feb 12 20:23:45.009797 kernel: acpiphp: Slot [25] registered
Feb 12 20:23:45.009815 kernel: acpiphp: Slot [26] registered
Feb 12 20:23:45.009831 kernel: acpiphp: Slot [27] registered
Feb 12 20:23:45.009853 kernel: acpiphp: Slot [28] registered
Feb 12 20:23:45.009870 kernel: acpiphp: Slot [29] registered
Feb 12 20:23:45.009886 kernel: acpiphp: Slot [30] registered
Feb 12 20:23:45.009902 kernel: acpiphp: Slot [31] registered
Feb 12 20:23:45.009918 kernel: PCI host bridge to bus 0000:00
Feb 12 20:23:45.010183 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 12 20:23:45.010386 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 20:23:45.010582 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 12 20:23:45.014521 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 12 20:23:45.014821 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 12 20:23:45.015089 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 12 20:23:45.015308 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 12 20:23:45.015532 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 12 20:23:45.015737 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 12 20:23:45.016016 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 12 20:23:45.016231 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 12 20:23:45.016431 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 12 20:23:45.016630 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 12 20:23:45.016856 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 12 20:23:45.017063 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 12 20:23:45.017293 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 12 20:23:45.017506 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 12 20:23:45.017709 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 12 20:23:45.017935 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 12 20:23:45.018150 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 12 20:23:45.018338 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 12 20:23:45.018516 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 20:23:45.018700 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 12 20:23:45.018727 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 20:23:45.018760 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 20:23:45.018783 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 20:23:45.018800 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 20:23:45.018817 kernel: iommu: Default domain type: Translated
Feb 12 20:23:45.018834 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 20:23:45.018851 kernel: vgaarb: loaded
Feb 12 20:23:45.018868 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:23:45.018884 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:23:45.018907 kernel: PTP clock support registered
Feb 12 20:23:45.018923 kernel: Registered efivars operations
Feb 12 20:23:45.018940 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 20:23:45.018956 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:23:45.018973 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:23:45.018990 kernel: pnp: PnP ACPI init
Feb 12 20:23:45.019218 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 12 20:23:45.019243 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 20:23:45.019260 kernel: NET: Registered PF_INET protocol family
Feb 12 20:23:45.019282 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:23:45.019299 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 20:23:45.019316 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:23:45.019332 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:23:45.019349 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 20:23:45.019365 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 20:23:45.019381 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:23:45.019398 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:23:45.019414 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:23:45.019435 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:23:45.019451 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 12 20:23:45.019468 kernel: kvm [1]: HYP mode not available
Feb 12 20:23:45.019484 kernel: Initialise system trusted keyrings
Feb 12 20:23:45.019501 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 20:23:45.019517 kernel: Key type asymmetric registered
Feb 12 20:23:45.019534 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:23:45.019550 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:23:45.019566 kernel: io scheduler mq-deadline registered
Feb 12 20:23:45.019586 kernel: io scheduler kyber registered
Feb 12 20:23:45.019602 kernel: io scheduler bfq registered
Feb 12 20:23:45.019847 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 12 20:23:45.019873 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 20:23:45.019890 kernel: ACPI: button: Power Button [PWRB]
Feb 12 20:23:45.019907 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:23:45.019924 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 12 20:23:45.020131 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 12 20:23:45.020159 kernel: printk: console [ttyS0] disabled
Feb 12 20:23:45.020177 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 12 20:23:45.020193 kernel: printk: console [ttyS0] enabled
Feb 12 20:23:45.020209 kernel: printk: bootconsole [uart0] disabled
Feb 12 20:23:45.020225 kernel: thunder_xcv, ver 1.0
Feb 12 20:23:45.020242 kernel: thunder_bgx, ver 1.0
Feb 12 20:23:45.020258 kernel: nicpf, ver 1.0
Feb 12 20:23:45.020274 kernel: nicvf, ver 1.0
Feb 12 20:23:45.020499 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 20:23:45.020691 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T20:23:44 UTC (1707769424)
Feb 12 20:23:45.020714 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 20:23:45.020731 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:23:45.020763 kernel: Segment Routing with IPv6
Feb 12 20:23:45.020784 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:23:45.020801 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:23:45.020817 kernel: Key type dns_resolver registered
Feb 12 20:23:45.020834 kernel: registered taskstats version 1
Feb 12 20:23:45.020855 kernel: Loading compiled-in X.509 certificates
Feb 12 20:23:45.020872 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 20:23:45.020888 kernel: Key type .fscrypt registered
Feb 12 20:23:45.020904 kernel: Key type fscrypt-provisioning registered
Feb 12 20:23:45.020919 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:23:45.020935 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:23:45.020951 kernel: ima: No architecture policies found
Feb 12 20:23:45.020967 kernel: Freeing unused kernel memory: 34688K
Feb 12 20:23:45.020983 kernel: Run /init as init process
Feb 12 20:23:45.021003 kernel: with arguments:
Feb 12 20:23:45.021020 kernel: /init
Feb 12 20:23:45.021035 kernel: with environment:
Feb 12 20:23:45.021051 kernel: HOME=/
Feb 12 20:23:45.021067 kernel: TERM=linux
Feb 12 20:23:45.021083 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:23:45.021104 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:23:45.021125 systemd[1]: Detected virtualization amazon.
Feb 12 20:23:45.021148 systemd[1]: Detected architecture arm64.
Feb 12 20:23:45.021167 systemd[1]: Running in initrd.
Feb 12 20:23:45.021206 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:23:45.021224 systemd[1]: Hostname set to .
Feb 12 20:23:45.021243 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:23:45.021262 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:23:45.021279 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:23:45.021296 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:23:45.021319 systemd[1]: Reached target paths.target.
Feb 12 20:23:45.021337 systemd[1]: Reached target slices.target.
Feb 12 20:23:45.021354 systemd[1]: Reached target swap.target.
Feb 12 20:23:45.021371 systemd[1]: Reached target timers.target.
Feb 12 20:23:45.021389 systemd[1]: Listening on iscsid.socket.
Feb 12 20:23:45.021407 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:23:45.021425 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:23:45.021442 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:23:45.021464 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:23:45.021482 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:23:45.021499 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:23:45.021517 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:23:45.021534 systemd[1]: Reached target sockets.target.
Feb 12 20:23:45.021552 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:23:45.021569 systemd[1]: Finished network-cleanup.service.
Feb 12 20:23:45.021587 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:23:45.021604 systemd[1]: Starting systemd-journald.service...
Feb 12 20:23:45.021626 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:23:45.021644 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:23:45.021661 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:23:45.021678 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:23:45.021696 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:23:45.021713 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:23:45.021732 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:23:45.021772 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:23:45.021801 systemd-journald[309]: Journal started
Feb 12 20:23:45.021915 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2ddd1afa5ef58b4d72935ec0f16de8) is 8.0M, max 75.4M, 67.4M free.
Feb 12 20:23:45.034077 systemd[1]: Started systemd-journald.service.
Feb 12 20:23:45.034141 kernel: audit: type=1130 audit(1707769425.023:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:44.966256 systemd-modules-load[310]: Inserted module 'overlay'
Feb 12 20:23:45.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.038696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:23:45.049965 kernel: audit: type=1130 audit(1707769425.037:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.058080 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:23:45.075942 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:23:45.075979 kernel: audit: type=1130 audit(1707769425.058:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.076004 kernel: Bridge firewalling registered
Feb 12 20:23:45.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.061476 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:23:45.076462 systemd-modules-load[310]: Inserted module 'br_netfilter'
Feb 12 20:23:45.098783 kernel: SCSI subsystem initialized
Feb 12 20:23:45.113782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:23:45.114803 systemd-resolved[311]: Positive Trust Anchors:
Feb 12 20:23:45.121814 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:23:45.121848 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:23:45.114828 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:23:45.114883 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:23:45.144566 dracut-cmdline[326]: dracut-dracut-053
Feb 12 20:23:45.148138 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 20:23:45.167186 systemd-modules-load[310]: Inserted module 'dm_multipath'
Feb 12 20:23:45.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.172692 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:23:45.175925 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:23:45.192795 kernel: audit: type=1130 audit(1707769425.172:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.207628 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:23:45.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.219783 kernel: audit: type=1130 audit(1707769425.207:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.306785 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 20:23:45.318782 kernel: iscsi: registered transport (tcp)
Feb 12 20:23:45.342776 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:23:45.342847 kernel: QLogic iSCSI HBA Driver
Feb 12 20:23:45.549698 systemd-resolved[311]: Defaulting to hostname 'linux'.
Feb 12 20:23:45.551626 kernel: random: crng init done
Feb 12 20:23:45.553110 systemd[1]: Started systemd-resolved.service.
Feb 12 20:23:45.568116 kernel: audit: type=1130 audit(1707769425.553:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.555044 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:23:45.582467 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:23:45.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.587155 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:23:45.597918 kernel: audit: type=1130 audit(1707769425.583:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:45.653806 kernel: raid6: neonx8 gen() 6358 MB/s
Feb 12 20:23:45.671781 kernel: raid6: neonx8 xor() 4738 MB/s
Feb 12 20:23:45.689782 kernel: raid6: neonx4 gen() 6386 MB/s
Feb 12 20:23:45.707801 kernel: raid6: neonx4 xor() 4899 MB/s
Feb 12 20:23:45.725782 kernel: raid6: neonx2 gen() 5661 MB/s
Feb 12 20:23:45.743794 kernel: raid6: neonx2 xor() 4504 MB/s
Feb 12 20:23:45.761791 kernel: raid6: neonx1 gen() 4419 MB/s
Feb 12 20:23:45.779794 kernel: raid6: neonx1 xor() 3674 MB/s
Feb 12 20:23:45.797789 kernel: raid6: int64x8 gen() 3392 MB/s
Feb 12 20:23:45.815792 kernel: raid6: int64x8 xor() 2096 MB/s
Feb 12 20:23:45.833794 kernel: raid6: int64x4 gen() 3750 MB/s
Feb 12 20:23:45.851787 kernel: raid6: int64x4 xor() 2200 MB/s
Feb 12 20:23:45.869791 kernel: raid6: int64x2 gen() 3539 MB/s
Feb 12 20:23:45.887788 kernel: raid6: int64x2 xor() 1951 MB/s
Feb 12 20:23:45.905797 kernel: raid6: int64x1 gen() 2764 MB/s
Feb 12 20:23:45.925299 kernel: raid6: int64x1 xor() 1453 MB/s
Feb 12 20:23:45.925350 kernel: raid6: using algorithm neonx4 gen() 6386 MB/s
Feb 12 20:23:45.925374 kernel: raid6: .... xor() 4899 MB/s, rmw enabled
Feb 12 20:23:45.927118 kernel: raid6: using neon recovery algorithm
Feb 12 20:23:45.945785 kernel: xor: measuring software checksum speed
Feb 12 20:23:45.948782 kernel: 8regs : 9332 MB/sec
Feb 12 20:23:45.950779 kernel: 32regs : 11098 MB/sec
Feb 12 20:23:45.954985 kernel: arm64_neon : 9661 MB/sec
Feb 12 20:23:45.955017 kernel: xor: using function: 32regs (11098 MB/sec)
Feb 12 20:23:46.045797 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 20:23:46.063061 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 20:23:46.077155 kernel: audit: type=1130 audit(1707769426.063:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:46.077209 kernel: audit: type=1334 audit(1707769426.064:10): prog-id=7 op=LOAD
Feb 12 20:23:46.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:46.064000 audit: BPF prog-id=7 op=LOAD
Feb 12 20:23:46.064000 audit: BPF prog-id=8 op=LOAD
Feb 12 20:23:46.077871 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:23:46.106377 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Feb 12 20:23:46.116853 systemd[1]: Started systemd-udevd.service.
Feb 12 20:23:46.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:46.124103 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 20:23:46.152296 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation
Feb 12 20:23:46.214563 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 20:23:46.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:46.217777 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:23:46.326250 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:23:46.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:46.436605 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 20:23:46.436672 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 12 20:23:46.447556 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 12 20:23:46.447907 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 12 20:23:46.450782 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 12 20:23:46.452774 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 12 20:23:46.463795 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 12 20:23:46.469780 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:fa:c8:31:bf:7b
Feb 12 20:23:46.470082 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 20:23:46.473918 kernel: GPT:9289727 != 16777215
Feb 12 20:23:46.473959 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 20:23:46.473983 kernel: GPT:9289727 != 16777215
Feb 12 20:23:46.474005 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 20:23:46.474035 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:23:46.484915 (udev-worker)[561]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:23:46.567804 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (558)
Feb 12 20:23:46.584228 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 20:23:46.628903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:23:46.678317 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 20:23:46.709348 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 20:23:46.711782 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 20:23:46.726251 systemd[1]: Starting disk-uuid.service...
Feb 12 20:23:46.738318 disk-uuid[663]: Primary Header is updated.
Feb 12 20:23:46.738318 disk-uuid[663]: Secondary Entries is updated.
Feb 12 20:23:46.738318 disk-uuid[663]: Secondary Header is updated.
Feb 12 20:23:46.754629 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:23:46.757781 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:23:46.766774 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:23:47.766695 disk-uuid[664]: The operation has completed successfully.
Feb 12 20:23:47.769979 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:23:47.930086 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 20:23:47.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:47.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:47.930305 systemd[1]: Finished disk-uuid.service.
Feb 12 20:23:47.955507 systemd[1]: Starting verity-setup.service...
Feb 12 20:23:47.981793 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 20:23:48.051387 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 20:23:48.055395 systemd[1]: Finished verity-setup.service.
Feb 12 20:23:48.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:48.060231 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 20:23:48.144797 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:23:48.146340 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 20:23:48.149280 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 20:23:48.153280 systemd[1]: Starting ignition-setup.service...
Feb 12 20:23:48.158638 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 20:23:48.184996 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:23:48.185069 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:23:48.187262 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:23:48.195808 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:23:48.213338 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 20:23:48.248722 systemd[1]: Finished ignition-setup.service.
Feb 12 20:23:48.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:48.253829 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 20:23:48.307973 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 20:23:48.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:48.310000 audit: BPF prog-id=9 op=LOAD
Feb 12 20:23:48.313860 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:23:48.361575 systemd-networkd[1178]: lo: Link UP
Feb 12 20:23:48.361600 systemd-networkd[1178]: lo: Gained carrier
Feb 12 20:23:48.365393 systemd-networkd[1178]: Enumeration completed
Feb 12 20:23:48.365551 systemd[1]: Started systemd-networkd.service.
Feb 12 20:23:48.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:48.369466 systemd[1]: Reached target network.target.
Feb 12 20:23:48.379107 systemd-networkd[1178]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:23:48.383098 systemd[1]: Starting iscsiuio.service...
Feb 12 20:23:48.385008 systemd-networkd[1178]: eth0: Link UP
Feb 12 20:23:48.385017 systemd-networkd[1178]: eth0: Gained carrier
Feb 12 20:23:48.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:48.397490 systemd[1]: Started iscsiuio.service.
Feb 12 20:23:48.403680 systemd[1]: Starting iscsid.service...
Feb 12 20:23:48.417980 iscsid[1183]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:23:48.417980 iscsid[1183]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 20:23:48.417980 iscsid[1183]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 20:23:48.417980 iscsid[1183]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 20:23:48.417980 iscsid[1183]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:23:48.417980 iscsid[1183]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 20:23:48.421021 systemd-networkd[1178]: eth0: DHCPv4 address 172.31.16.103/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 20:23:48.446915 systemd[1]: Started iscsid.service.
Feb 12 20:23:48.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:48.451378 systemd[1]: Starting dracut-initqueue.service...
Feb 12 20:23:48.478537 systemd[1]: Finished dracut-initqueue.service.
Feb 12 20:23:48.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:48.480667 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 20:23:48.485887 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:23:48.489633 systemd[1]: Reached target remote-fs.target.
Feb 12 20:23:48.494265 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 20:23:48.511840 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 20:23:48.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.017641 ignition[1134]: Ignition 2.14.0
Feb 12 20:23:49.017671 ignition[1134]: Stage: fetch-offline
Feb 12 20:23:49.018025 ignition[1134]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:23:49.018093 ignition[1134]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:23:49.038181 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:23:49.041013 ignition[1134]: Ignition finished successfully
Feb 12 20:23:49.043897 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 20:23:49.057265 kernel: kauditd_printk_skb: 15 callbacks suppressed
Feb 12 20:23:49.057574 kernel: audit: type=1130 audit(1707769429.044:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.047501 systemd[1]: Starting ignition-fetch.service...
Feb 12 20:23:49.068967 ignition[1202]: Ignition 2.14.0
Feb 12 20:23:49.068997 ignition[1202]: Stage: fetch
Feb 12 20:23:49.069372 ignition[1202]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:23:49.069441 ignition[1202]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:23:49.086381 ignition[1202]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:23:49.088932 ignition[1202]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:23:49.098424 ignition[1202]: INFO : PUT result: OK
Feb 12 20:23:49.101944 ignition[1202]: DEBUG : parsed url from cmdline: ""
Feb 12 20:23:49.101944 ignition[1202]: INFO : no config URL provided
Feb 12 20:23:49.101944 ignition[1202]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:23:49.101944 ignition[1202]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 12 20:23:49.101944 ignition[1202]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:23:49.127623 ignition[1202]: INFO : PUT result: OK
Feb 12 20:23:49.127623 ignition[1202]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 12 20:23:49.127623 ignition[1202]: INFO : GET result: OK
Feb 12 20:23:49.127623 ignition[1202]: DEBUG : parsing config with SHA512: 5e0180a64d5a9b397665e38a1a83eb74288e96f6ec43d50071a9f4f00ef7b40397531715bca7c5349199cc25971ab7727410a9b90498e337db370e0e0a0ecea2
Feb 12 20:23:49.173146 unknown[1202]: fetched base config from "system"
Feb 12 20:23:49.173461 unknown[1202]: fetched base config from "system"
Feb 12 20:23:49.174836 ignition[1202]: fetch: fetch complete
Feb 12 20:23:49.173481 unknown[1202]: fetched user config from "aws"
Feb 12 20:23:49.174853 ignition[1202]: fetch: fetch passed
Feb 12 20:23:49.174983 ignition[1202]: Ignition finished successfully
Feb 12 20:23:49.186512 systemd[1]: Finished ignition-fetch.service.
Feb 12 20:23:49.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.191489 systemd[1]: Starting ignition-kargs.service...
Feb 12 20:23:49.201521 kernel: audit: type=1130 audit(1707769429.188:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.215484 ignition[1208]: Ignition 2.14.0
Feb 12 20:23:49.215514 ignition[1208]: Stage: kargs
Feb 12 20:23:49.215853 ignition[1208]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:23:49.215914 ignition[1208]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:23:49.231408 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:23:49.234211 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:23:49.239616 ignition[1208]: INFO : PUT result: OK
Feb 12 20:23:49.245046 ignition[1208]: kargs: kargs passed
Feb 12 20:23:49.245161 ignition[1208]: Ignition finished successfully
Feb 12 20:23:49.248475 systemd[1]: Finished ignition-kargs.service.
Feb 12 20:23:49.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.261801 kernel: audit: type=1130 audit(1707769429.250:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.262822 systemd[1]: Starting ignition-disks.service...
Feb 12 20:23:49.277817 ignition[1214]: Ignition 2.14.0
Feb 12 20:23:49.277848 ignition[1214]: Stage: disks
Feb 12 20:23:49.278155 ignition[1214]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:23:49.278212 ignition[1214]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:23:49.293621 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:23:49.296372 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:23:49.299166 ignition[1214]: INFO : PUT result: OK
Feb 12 20:23:49.304664 ignition[1214]: disks: disks passed
Feb 12 20:23:49.304822 ignition[1214]: Ignition finished successfully
Feb 12 20:23:49.309076 systemd[1]: Finished ignition-disks.service.
Feb 12 20:23:49.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.311207 systemd[1]: Reached target initrd-root-device.target.
Feb 12 20:23:49.324271 kernel: audit: type=1130 audit(1707769429.308:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.326072 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:23:49.327849 systemd[1]: Reached target local-fs.target.
Feb 12 20:23:49.331940 systemd[1]: Reached target sysinit.target.
Feb 12 20:23:49.338009 systemd[1]: Reached target basic.target.
Feb 12 20:23:49.342690 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 20:23:49.379184 systemd-fsck[1222]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 20:23:49.402992 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 20:23:49.408445 systemd[1]: Mounting sysroot.mount...
Feb 12 20:23:49.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.424821 kernel: audit: type=1130 audit(1707769429.404:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.433799 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:23:49.434844 systemd[1]: Mounted sysroot.mount.
Feb 12 20:23:49.437645 systemd-networkd[1178]: eth0: Gained IPv6LL
Feb 12 20:23:49.438029 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 20:23:49.455486 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 20:23:49.459601 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 20:23:49.460294 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 20:23:49.460352 systemd[1]: Reached target ignition-diskful.target.
Feb 12 20:23:49.475446 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 20:23:49.495963 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:23:49.501263 systemd[1]: Starting initrd-setup-root.service...
Feb 12 20:23:49.512649 initrd-setup-root[1244]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 20:23:49.525799 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1239)
Feb 12 20:23:49.532656 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:23:49.532741 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:23:49.532814 initrd-setup-root[1252]: cut: /sysroot/etc/group: No such file or directory
Feb 12 20:23:49.537567 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:23:49.543113 initrd-setup-root[1268]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 20:23:49.554409 initrd-setup-root[1284]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 20:23:49.560810 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:23:49.566426 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:23:49.806001 systemd[1]: Finished initrd-setup-root.service.
Feb 12 20:23:49.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.810997 systemd[1]: Starting ignition-mount.service...
Feb 12 20:23:49.820425 kernel: audit: type=1130 audit(1707769429.808:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.824196 systemd[1]: Starting sysroot-boot.service...
Feb 12 20:23:49.832049 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 20:23:49.832243 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 20:23:49.873329 ignition[1305]: INFO : Ignition 2.14.0
Feb 12 20:23:49.873329 ignition[1305]: INFO : Stage: mount
Feb 12 20:23:49.879540 ignition[1305]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:23:49.879540 ignition[1305]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:23:49.883547 systemd[1]: Finished sysroot-boot.service.
Feb 12 20:23:49.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.897825 kernel: audit: type=1130 audit(1707769429.888:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.904868 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:23:49.907411 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:23:49.910787 ignition[1305]: INFO : PUT result: OK
Feb 12 20:23:49.916007 ignition[1305]: INFO : mount: mount passed
Feb 12 20:23:49.917742 ignition[1305]: INFO : Ignition finished successfully
Feb 12 20:23:49.921104 systemd[1]: Finished ignition-mount.service.
Feb 12 20:23:49.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.931738 systemd[1]: Starting ignition-files.service...
Feb 12 20:23:49.935779 kernel: audit: type=1130 audit(1707769429.919:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:23:49.947366 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:23:49.964191 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1314)
Feb 12 20:23:49.969698 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:23:49.969732 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:23:49.969777 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:23:49.978793 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:23:49.982842 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:23:50.001413 ignition[1333]: INFO : Ignition 2.14.0
Feb 12 20:23:50.001413 ignition[1333]: INFO : Stage: files
Feb 12 20:23:50.004916 ignition[1333]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:23:50.004916 ignition[1333]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:23:50.021912 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:23:50.024465 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:23:50.027783 ignition[1333]: INFO : PUT result: OK
Feb 12 20:23:50.033129 ignition[1333]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 20:23:50.035936 ignition[1333]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 20:23:50.038762 ignition[1333]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 20:23:50.103200 ignition[1333]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 20:23:50.106158 ignition[1333]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 20:23:50.109363 ignition[1333]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 20:23:50.108097 unknown[1333]: wrote ssh authorized keys file for user: core
Feb 12 20:23:50.114349 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:23:50.114349 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:23:50.114349 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 20:23:50.114349 ignition[1333]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 12 20:23:50.442957 ignition[1333]: INFO : GET result: OK
Feb 12 20:23:50.950541 ignition[1333]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 12 20:23:50.955380 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 20:23:50.955380 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 20:23:50.963488 ignition[1333]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 12 20:23:51.213498 ignition[1333]: INFO : GET result: OK
Feb 12 20:23:51.512971 ignition[1333]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 12 20:23:51.517920 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 20:23:51.517920 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 20:23:51.525573 ignition[1333]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:23:51.536505 ignition[1333]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem783518756"
Feb 12 20:23:51.539616 ignition[1333]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem783518756": device or resource busy
Feb 12 20:23:51.543120 ignition[1333]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem783518756", trying btrfs: device or resource busy
Feb 12 20:23:51.546852 ignition[1333]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem783518756"
Feb 12 20:23:51.554436 ignition[1333]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem783518756"
Feb 12 20:23:51.569330 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1336)
Feb 12 20:23:51.571248 ignition[1333]: INFO : op(3): [started] unmounting "/mnt/oem783518756"
Feb 12 20:23:51.571248 ignition[1333]: INFO : op(3): [finished] unmounting "/mnt/oem783518756"
Feb 12 20:23:51.571248 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 20:23:51.589506 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:23:51.593145 ignition[1333]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 12 20:23:51.598555 systemd[1]: mnt-oem783518756.mount: Deactivated successfully.
Feb 12 20:23:51.715384 ignition[1333]: INFO : GET result: OK
Feb 12 20:23:52.331106 ignition[1333]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 12 20:23:52.336519 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:23:52.336519 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:23:52.336519 ignition[1333]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 12 20:23:52.401025 ignition[1333]: INFO : GET result: OK
Feb 12 20:23:53.984013 ignition[1333]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 12 20:23:53.989248 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:23:53.989248 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 20:23:53.989248 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 20:23:53.989248 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:23:54.003264 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:23:54.006960 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:23:54.010820 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:23:54.014518 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 20:23:54.018309 ignition[1333]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:23:54.030216 ignition[1333]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3100796256"
Feb 12 20:23:54.030216 ignition[1333]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3100796256": device or resource busy
Feb 12 20:23:54.030216 ignition[1333]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3100796256", trying btrfs: device or resource busy
Feb 12 20:23:54.030216 ignition[1333]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3100796256"
Feb 12 20:23:54.030216 ignition[1333]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3100796256"
Feb 12 20:23:54.030216 ignition[1333]: INFO : op(6): [started] unmounting "/mnt/oem3100796256"
Feb 12 20:23:54.030216 ignition[1333]: INFO : op(6): [finished] unmounting "/mnt/oem3100796256"
Feb 12 20:23:54.030216 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 20:23:54.030216 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 20:23:54.030216 ignition[1333]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:23:54.066225 systemd[1]: mnt-oem3100796256.mount: Deactivated successfully.
Feb 12 20:23:54.086525 ignition[1333]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem30404123" Feb 12 20:23:54.089466 ignition[1333]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem30404123": device or resource busy Feb 12 20:23:54.089466 ignition[1333]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem30404123", trying btrfs: device or resource busy Feb 12 20:23:54.089466 ignition[1333]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem30404123" Feb 12 20:23:54.113393 ignition[1333]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem30404123" Feb 12 20:23:54.113393 ignition[1333]: INFO : op(9): [started] unmounting "/mnt/oem30404123" Feb 12 20:23:54.112173 systemd[1]: mnt-oem30404123.mount: Deactivated successfully. Feb 12 20:23:54.120950 ignition[1333]: INFO : op(9): [finished] unmounting "/mnt/oem30404123" Feb 12 20:23:54.120950 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 12 20:23:54.120950 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 20:23:54.120950 ignition[1333]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:23:54.144037 ignition[1333]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2908290021" Feb 12 20:23:54.147091 ignition[1333]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2908290021": device or resource busy Feb 12 20:23:54.147091 ignition[1333]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2908290021", trying btrfs: device or resource busy Feb 12 20:23:54.147091 ignition[1333]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2908290021" Feb 12 20:23:54.147091 ignition[1333]: INFO : op(b): [finished] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem2908290021" Feb 12 20:23:54.161910 ignition[1333]: INFO : op(c): [started] unmounting "/mnt/oem2908290021" Feb 12 20:23:54.169328 ignition[1333]: INFO : op(c): [finished] unmounting "/mnt/oem2908290021" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(10): [started] processing unit "amazon-ssm-agent.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(10): op(11): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(10): op(11): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(10): [finished] processing unit "amazon-ssm-agent.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(12): [started] processing unit "nvidia.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(12): [finished] processing unit "nvidia.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(13): [started] processing unit "containerd.service" Feb 12 20:23:54.172023 ignition[1333]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:23:54.209510 ignition[1333]: INFO : 
files: op(13): [finished] processing unit "containerd.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(15): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(15): op(16): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(15): op(16): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(15): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(17): [started] processing unit "prepare-critools.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(17): op(18): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(17): op(18): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(17): [finished] processing unit "prepare-critools.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(1b): [finished] setting preset to enabled for 
"prepare-critools.service" Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:23:54.209510 ignition[1333]: INFO : files: op(1d): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 20:23:54.273533 ignition[1333]: INFO : files: op(1d): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 20:23:54.273533 ignition[1333]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:23:54.273533 ignition[1333]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:23:54.273533 ignition[1333]: INFO : files: files passed Feb 12 20:23:54.273533 ignition[1333]: INFO : Ignition finished successfully Feb 12 20:23:54.288928 systemd[1]: Finished ignition-files.service. Feb 12 20:23:54.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.300785 kernel: audit: type=1130 audit(1707769434.290:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.302498 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:23:54.307273 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:23:54.308876 systemd[1]: Starting ignition-quench.service... Feb 12 20:23:54.320372 systemd[1]: ignition-quench.service: Deactivated successfully. 
Feb 12 20:23:54.322431 systemd[1]: Finished ignition-quench.service. Feb 12 20:23:54.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.334838 kernel: audit: type=1130 audit(1707769434.323:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.343795 kernel: audit: type=1131 audit(1707769434.323:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.345283 initrd-setup-root-after-ignition[1358]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:23:54.350198 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:23:54.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.354413 systemd[1]: Reached target ignition-complete.target. Feb 12 20:23:54.365218 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:23:54.377941 kernel: audit: type=1130 audit(1707769434.352:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.395503 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Feb 12 20:23:54.414231 kernel: audit: type=1130 audit(1707769434.395:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.414279 kernel: audit: type=1131 audit(1707769434.395:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.395696 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:23:54.414329 systemd[1]: Reached target initrd-fs.target. Feb 12 20:23:54.416125 systemd[1]: Reached target initrd.target. Feb 12 20:23:54.422208 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:23:54.426160 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:23:54.451119 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:23:54.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.455827 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:23:54.464844 kernel: audit: type=1130 audit(1707769434.452:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.478406 systemd[1]: Stopped target nss-lookup.target. 
Feb 12 20:23:54.481896 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:23:54.485614 systemd[1]: Stopped target timers.target. Feb 12 20:23:54.488723 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:23:54.490893 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:23:54.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.494376 systemd[1]: Stopped target initrd.target. Feb 12 20:23:54.503056 kernel: audit: type=1131 audit(1707769434.492:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.504888 systemd[1]: Stopped target basic.target. Feb 12 20:23:54.508242 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:23:54.512139 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:23:54.515841 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:23:54.519654 systemd[1]: Stopped target remote-fs.target. Feb 12 20:23:54.523001 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:23:54.526481 systemd[1]: Stopped target sysinit.target. Feb 12 20:23:54.529791 systemd[1]: Stopped target local-fs.target. Feb 12 20:23:54.532870 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:23:54.536133 systemd[1]: Stopped target swap.target. Feb 12 20:23:54.537793 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:23:54.539331 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:23:54.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.544884 systemd[1]: Stopped target cryptsetup.target. 
Feb 12 20:23:54.553814 kernel: audit: type=1131 audit(1707769434.543:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.555213 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:23:54.557368 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:23:54.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.560775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:23:54.561002 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:23:54.569780 kernel: audit: type=1131 audit(1707769434.559:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.573962 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:23:54.576028 systemd[1]: Stopped ignition-files.service. Feb 12 20:23:54.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.580729 systemd[1]: Stopping ignition-mount.service... Feb 12 20:23:54.583387 systemd[1]: Stopping iscsiuio.service... Feb 12 20:23:54.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:54.586407 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:23:54.587974 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:23:54.588387 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:23:54.590706 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:23:54.592461 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:23:54.611110 ignition[1371]: INFO : Ignition 2.14.0 Feb 12 20:23:54.612939 ignition[1371]: INFO : Stage: umount Feb 12 20:23:54.612939 ignition[1371]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:23:54.612939 ignition[1371]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:23:54.629511 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:23:54.632067 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:23:54.635062 ignition[1371]: INFO : PUT result: OK Feb 12 20:23:54.640043 ignition[1371]: INFO : umount: umount passed Feb 12 20:23:54.641838 ignition[1371]: INFO : Ignition finished successfully Feb 12 20:23:54.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.651561 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:23:54.653539 systemd[1]: Stopped iscsiuio.service. Feb 12 20:23:54.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 20:23:54.655890 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:23:54.656067 systemd[1]: Stopped ignition-mount.service. Feb 12 20:23:54.663222 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:23:54.663872 systemd[1]: Stopped ignition-disks.service. Feb 12 20:23:54.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.668780 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:23:54.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.668886 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:23:54.671413 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 20:23:54.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.671500 systemd[1]: Stopped ignition-fetch.service. Feb 12 20:23:54.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.676286 systemd[1]: Stopped target network.target. Feb 12 20:23:54.679418 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:23:54.679517 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:23:54.681910 systemd[1]: Stopped target paths.target. Feb 12 20:23:54.692925 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:23:54.696849 systemd[1]: Stopped systemd-ask-password-console.path. 
Feb 12 20:23:54.700620 systemd[1]: Stopped target slices.target. Feb 12 20:23:54.703786 systemd[1]: Stopped target sockets.target. Feb 12 20:23:54.706926 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:23:54.708682 systemd[1]: Closed iscsid.socket. Feb 12 20:23:54.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.710016 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:23:54.710090 systemd[1]: Closed iscsiuio.socket. Feb 12 20:23:54.710291 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:23:54.710378 systemd[1]: Stopped ignition-setup.service. Feb 12 20:23:54.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.711049 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:23:54.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:54.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.740000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:23:54.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.711407 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:23:54.712191 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:23:54.712376 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:23:54.720828 systemd-networkd[1178]: eth0: DHCPv6 lease lost Feb 12 20:23:54.750000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:23:54.722710 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:23:54.724695 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:23:54.728871 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:23:54.729071 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:23:54.731354 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:23:54.731437 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:23:54.734971 systemd[1]: Stopping network-cleanup.service... Feb 12 20:23:54.737269 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:23:54.737392 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:23:54.739407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:23:54.739490 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:23:54.741459 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:23:54.741549 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:23:54.746352 systemd[1]: Stopping systemd-udevd.service... 
Feb 12 20:23:54.781424 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:23:54.783719 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:23:54.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.788242 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:23:54.788331 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:23:54.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.790410 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:23:54.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.790608 systemd[1]: Closed systemd-udevd-kernel.socket. 
Feb 12 20:23:54.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:54.793990 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:23:54.794075 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:23:54.796093 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:23:54.796179 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:23:54.796526 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:23:54.796598 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:23:54.798216 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:23:54.798793 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 20:23:54.798906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 20:23:54.807153 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:23:54.807265 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:23:54.814982 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:23:54.815083 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:23:54.822202 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:23:54.822426 systemd[1]: Stopped network-cleanup.service. Feb 12 20:23:54.824879 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 12 20:23:54.825089 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:23:55.031396 systemd[1]: mnt-oem2908290021.mount: Deactivated successfully. Feb 12 20:23:55.031580 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:23:55.036105 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:23:55.036243 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:23:55.165919 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:23:55.166143 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:23:55.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:55.172785 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:23:55.176460 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:23:55.176595 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:23:55.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:55.183394 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:23:55.203206 systemd[1]: Switching root. Feb 12 20:23:55.206000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:23:55.206000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:23:55.206000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:23:55.212000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:23:55.212000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:23:55.238073 iscsid[1183]: iscsid shutting down. Feb 12 20:23:55.239696 systemd-journald[309]: Received SIGTERM from PID 1 (n/a). Feb 12 20:23:55.239820 systemd-journald[309]: Journal stopped Feb 12 20:24:05.981655 kernel: SELinux: Class mctp_socket not defined in policy. 
Feb 12 20:24:05.981827 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 20:24:05.981873 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:24:05.981908 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:24:05.981952 kernel: SELinux: policy capability open_perms=1 Feb 12 20:24:05.981986 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:24:05.982020 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:24:05.982050 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:24:05.982080 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:24:05.982114 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:24:05.982152 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:24:05.982188 systemd[1]: Successfully loaded SELinux policy in 184.533ms. Feb 12 20:24:05.982250 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.771ms. Feb 12 20:24:05.982291 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:24:05.982324 systemd[1]: Detected virtualization amazon. Feb 12 20:24:05.982356 systemd[1]: Detected architecture arm64. Feb 12 20:24:05.982390 systemd[1]: Detected first boot. Feb 12 20:24:05.982427 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:24:05.982462 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:24:05.982495 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:24:05.982529 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Feb 12 20:24:05.982564 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:05.982602 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:05.982652 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:24:05.982683 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:24:05.982727 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:24:05.984571 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 20:24:05.984628 systemd[1]: Created slice system-getty.slice. Feb 12 20:24:05.984661 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:24:05.984697 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:24:05.984730 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:24:05.984814 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:24:05.984852 systemd[1]: Created slice user.slice. Feb 12 20:24:05.984884 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:24:05.984915 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:24:05.984960 systemd[1]: Set up automount boot.automount. Feb 12 20:24:05.984992 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:24:05.985024 systemd[1]: Reached target integritysetup.target. Feb 12 20:24:05.985058 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:24:05.985092 systemd[1]: Reached target remote-fs.target. Feb 12 20:24:05.985125 systemd[1]: Reached target slices.target. Feb 12 20:24:05.985158 systemd[1]: Reached target swap.target. Feb 12 20:24:05.985188 systemd[1]: Reached target torcx.target. 
Feb 12 20:24:05.985226 systemd[1]: Reached target veritysetup.target. Feb 12 20:24:05.985280 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:24:05.985321 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:24:05.985353 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:24:05.985384 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 12 20:24:05.985417 kernel: audit: type=1400 audit(1707769445.567:85): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:24:05.985451 kernel: audit: type=1335 audit(1707769445.567:86): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 20:24:05.985483 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:24:05.985519 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:24:05.985551 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:24:05.985581 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:24:05.985618 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:24:05.985648 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:24:05.985682 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:24:05.985719 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:24:05.985807 systemd[1]: Mounting media.mount... Feb 12 20:24:05.985851 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:24:05.985887 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:24:05.985924 systemd[1]: Mounting tmp.mount... Feb 12 20:24:05.985957 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:24:05.985990 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:24:05.986020 systemd[1]: Starting kmod-static-nodes.service... 
Feb 12 20:24:05.986054 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:24:05.986086 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:24:05.986120 systemd[1]: Starting modprobe@drm.service... Feb 12 20:24:05.986152 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:24:05.986184 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:24:05.986219 systemd[1]: Starting modprobe@loop.service... Feb 12 20:24:05.986251 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:24:05.986284 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 20:24:05.986316 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 20:24:05.986346 systemd[1]: Starting systemd-journald.service... Feb 12 20:24:05.986381 kernel: fuse: init (API version 7.34) Feb 12 20:24:05.986411 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:24:05.986443 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:24:05.986480 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:24:05.986513 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:24:05.986546 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:24:05.986576 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:24:05.986607 systemd[1]: Mounted media.mount. Feb 12 20:24:05.986637 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:24:05.986667 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:24:05.986698 systemd[1]: Mounted tmp.mount. Feb 12 20:24:05.986728 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:24:05.986799 kernel: audit: type=1130 audit(1707769445.864:87): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:05.986842 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:24:05.986873 kernel: loop: module loaded Feb 12 20:24:05.986903 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:24:05.986937 kernel: audit: type=1130 audit(1707769445.893:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.986966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:24:05.986996 kernel: audit: type=1131 audit(1707769445.893:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.987028 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:24:05.987063 kernel: audit: type=1130 audit(1707769445.923:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.987103 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:24:05.987134 kernel: audit: type=1131 audit(1707769445.923:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.987166 systemd[1]: Finished modprobe@drm.service. Feb 12 20:24:05.987198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:24:05.987229 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:24:05.987261 kernel: audit: type=1130 audit(1707769445.948:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:24:05.987294 kernel: audit: type=1131 audit(1707769445.948:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.987332 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:24:05.987363 kernel: audit: type=1305 audit(1707769445.964:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:24:05.987395 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:24:05.987430 systemd-journald[1526]: Journal started Feb 12 20:24:05.987559 systemd-journald[1526]: Runtime Journal (/run/log/journal/ec2ddd1afa5ef58b4d72935ec0f16de8) is 8.0M, max 75.4M, 67.4M free. Feb 12 20:24:05.567000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:24:05.567000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 20:24:05.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:05.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.964000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:24:05.964000 audit[1526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffda948f30 a2=4000 a3=1 items=0 ppid=1 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:05.964000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:24:05.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:05.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.994031 systemd[1]: Started systemd-journald.service. Feb 12 20:24:05.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.996013 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:24:05.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:05.996540 systemd[1]: Finished modprobe@loop.service. Feb 12 20:24:06.000069 systemd[1]: Finished systemd-modules-load.service. 
Feb 12 20:24:06.003164 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:24:06.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.008360 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:24:06.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.013958 systemd[1]: Reached target network-pre.target. Feb 12 20:24:06.018709 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:24:06.028893 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:24:06.030718 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:24:06.037390 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:24:06.052931 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:24:06.055044 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:24:06.060691 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:24:06.064668 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:24:06.068095 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:24:06.073515 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:24:06.077544 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:24:06.098067 systemd-journald[1526]: Time spent on flushing to /var/log/journal/ec2ddd1afa5ef58b4d72935ec0f16de8 is 86.446ms for 1089 entries. 
Feb 12 20:24:06.098067 systemd-journald[1526]: System Journal (/var/log/journal/ec2ddd1afa5ef58b4d72935ec0f16de8) is 8.0M, max 195.6M, 187.6M free. Feb 12 20:24:06.195660 systemd-journald[1526]: Received client request to flush runtime journal. Feb 12 20:24:06.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.110660 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:24:06.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.112873 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:24:06.145437 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:24:06.151352 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:24:06.167440 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:24:06.197626 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:24:06.288699 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:24:06.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:06.293704 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:24:06.314197 udevadm[1576]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 20:24:06.337565 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:24:06.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.342724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:24:06.485943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:24:06.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:07.202611 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:24:07.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:07.207062 systemd[1]: Starting systemd-udevd.service... Feb 12 20:24:07.251138 systemd-udevd[1582]: Using default interface naming scheme 'v252'. Feb 12 20:24:07.302938 systemd[1]: Started systemd-udevd.service. Feb 12 20:24:07.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:07.307911 systemd[1]: Starting systemd-networkd.service... Feb 12 20:24:07.318858 systemd[1]: Starting systemd-userdbd.service... Feb 12 20:24:07.386888 systemd[1]: Found device dev-ttyS0.device. 
Feb 12 20:24:07.432298 systemd[1]: Started systemd-userdbd.service. Feb 12 20:24:07.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:07.470355 (udev-worker)[1587]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:24:07.631935 systemd-networkd[1586]: lo: Link UP Feb 12 20:24:07.631957 systemd-networkd[1586]: lo: Gained carrier Feb 12 20:24:07.632929 systemd-networkd[1586]: Enumeration completed Feb 12 20:24:07.633131 systemd[1]: Started systemd-networkd.service. Feb 12 20:24:07.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:07.637165 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:24:07.638410 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:24:07.644785 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:24:07.645504 systemd-networkd[1586]: eth0: Link UP Feb 12 20:24:07.646070 systemd-networkd[1586]: eth0: Gained carrier Feb 12 20:24:07.664146 systemd-networkd[1586]: eth0: DHCPv4 address 172.31.16.103/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 12 20:24:07.693819 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1600) Feb 12 20:24:07.822174 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:24:07.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:07.835259 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 12 20:24:07.838082 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:24:07.907875 lvm[1702]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:24:07.947865 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:24:07.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:07.950130 systemd[1]: Reached target cryptsetup.target. Feb 12 20:24:07.955109 systemd[1]: Starting lvm2-activation.service... Feb 12 20:24:07.966174 lvm[1704]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:24:08.002051 systemd[1]: Finished lvm2-activation.service. Feb 12 20:24:08.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.004268 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:24:08.006282 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:24:08.006348 systemd[1]: Reached target local-fs.target. Feb 12 20:24:08.008345 systemd[1]: Reached target machines.target. Feb 12 20:24:08.016729 systemd[1]: Starting ldconfig.service... Feb 12 20:24:08.021965 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:24:08.022152 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 12 20:24:08.025164 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:24:08.030250 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:24:08.036517 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:24:08.040016 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:24:08.040241 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:24:08.043333 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:24:08.077189 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1707 (bootctl) Feb 12 20:24:08.079712 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:24:08.087185 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:24:08.090242 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:24:08.094133 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:24:08.138343 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:24:08.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.267042 systemd-fsck[1716]: fsck.fat 4.2 (2021-01-31) Feb 12 20:24:08.267042 systemd-fsck[1716]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 12 20:24:08.270281 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 12 20:24:08.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.275810 systemd[1]: Mounting boot.mount... Feb 12 20:24:08.305791 systemd[1]: Mounted boot.mount. Feb 12 20:24:08.339832 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:24:08.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.549395 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:24:08.554709 systemd[1]: Starting audit-rules.service... Feb 12 20:24:08.559150 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:24:08.564109 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:24:08.580622 systemd[1]: Starting systemd-resolved.service... Feb 12 20:24:08.596106 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:24:08.607844 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:24:08.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.612514 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:24:08.615466 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 12 20:24:08.629000 audit[1745]: SYSTEM_BOOT pid=1745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.634268 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:24:08.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.672284 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:24:08.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:08.761000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:24:08.761000 audit[1758]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdd454ee0 a2=420 a3=0 items=0 ppid=1735 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:08.761000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:24:08.763837 augenrules[1758]: No rules Feb 12 20:24:08.765729 systemd[1]: Finished audit-rules.service. Feb 12 20:24:08.839467 systemd-resolved[1738]: Positive Trust Anchors: Feb 12 20:24:08.839497 systemd-resolved[1738]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:24:08.839550 systemd-resolved[1738]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:24:08.857920 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:24:08.860440 systemd[1]: Reached target time-set.target. Feb 12 20:24:08.864331 systemd-resolved[1738]: Defaulting to hostname 'linux'. Feb 12 20:24:08.869588 systemd[1]: Started systemd-resolved.service. Feb 12 20:24:08.871667 systemd[1]: Reached target network.target. Feb 12 20:24:08.873571 systemd[1]: Reached target nss-lookup.target. Feb 12 20:24:09.013985 systemd-networkd[1586]: eth0: Gained IPv6LL Feb 12 20:24:09.022154 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:24:09.024551 systemd[1]: Reached target network-online.target. Feb 12 20:24:09.588249 systemd-timesyncd[1740]: Contacted time server 208.113.130.146:123 (0.flatcar.pool.ntp.org). Feb 12 20:24:09.588543 systemd-timesyncd[1740]: Initial clock synchronization to Mon 2024-02-12 20:24:09.587856 UTC. Feb 12 20:24:09.588622 systemd-resolved[1738]: Clock change detected. Flushing caches. Feb 12 20:24:09.626380 ldconfig[1706]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:24:09.655153 systemd[1]: Finished ldconfig.service. Feb 12 20:24:09.660210 systemd[1]: Starting systemd-update-done.service... Feb 12 20:24:09.677120 systemd[1]: Finished systemd-update-done.service. Feb 12 20:24:09.679311 systemd[1]: Reached target sysinit.target. 
Feb 12 20:24:09.681344 systemd[1]: Started motdgen.path. Feb 12 20:24:09.683152 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:24:09.688462 systemd[1]: Started logrotate.timer. Feb 12 20:24:09.690547 systemd[1]: Started mdadm.timer. Feb 12 20:24:09.692159 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:24:09.694279 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:24:09.694340 systemd[1]: Reached target paths.target. Feb 12 20:24:09.696192 systemd[1]: Reached target timers.target. Feb 12 20:24:09.698722 systemd[1]: Listening on dbus.socket. Feb 12 20:24:09.702879 systemd[1]: Starting docker.socket... Feb 12 20:24:09.707289 systemd[1]: Listening on sshd.socket. Feb 12 20:24:09.709359 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:09.710527 systemd[1]: Listening on docker.socket. Feb 12 20:24:09.712710 systemd[1]: Reached target sockets.target. Feb 12 20:24:09.714827 systemd[1]: Reached target basic.target. Feb 12 20:24:09.717203 systemd[1]: System is tainted: cgroupsv1 Feb 12 20:24:09.717483 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:24:09.717681 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:24:09.720471 systemd[1]: Started amazon-ssm-agent.service. Feb 12 20:24:09.725832 systemd[1]: Starting containerd.service... Feb 12 20:24:09.730019 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 20:24:09.757626 systemd[1]: Starting dbus.service... Feb 12 20:24:09.762883 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:24:09.782136 systemd[1]: Starting extend-filesystems.service... 
Feb 12 20:24:09.788249 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:24:09.805814 systemd[1]: Starting motdgen.service... Feb 12 20:24:09.819748 systemd[1]: Started nvidia.service. Feb 12 20:24:09.845189 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:24:09.850169 systemd[1]: Starting prepare-critools.service... Feb 12 20:24:09.860384 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:24:09.864936 systemd[1]: Starting sshd-keygen.service... Feb 12 20:24:09.871787 systemd[1]: Starting systemd-logind.service... Feb 12 20:24:09.874574 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:09.875039 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:24:09.879205 systemd[1]: Starting update-engine.service... Feb 12 20:24:09.891178 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:24:09.943185 jq[1790]: true Feb 12 20:24:09.979847 jq[1777]: false Feb 12 20:24:09.962656 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:24:09.963264 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:24:09.994365 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:24:09.994927 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:24:10.013888 tar[1793]: crictl Feb 12 20:24:10.024441 tar[1801]: ./ Feb 12 20:24:10.024441 tar[1801]: ./macvlan Feb 12 20:24:10.038170 dbus-daemon[1776]: [system] SELinux support is enabled Feb 12 20:24:10.038595 systemd[1]: Started dbus.service. 
Feb 12 20:24:10.044064 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:24:10.044113 systemd[1]: Reached target system-config.target. Feb 12 20:24:10.046328 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:24:10.046371 systemd[1]: Reached target user-config.target. Feb 12 20:24:10.059911 systemd[1]: Created slice system-sshd.slice. Feb 12 20:24:10.092341 jq[1813]: true Feb 12 20:24:10.105267 extend-filesystems[1779]: Found nvme0n1 Feb 12 20:24:10.105267 extend-filesystems[1779]: Found nvme0n1p1 Feb 12 20:24:10.105267 extend-filesystems[1779]: Found nvme0n1p2 Feb 12 20:24:10.105267 extend-filesystems[1779]: Found nvme0n1p3 Feb 12 20:24:10.105267 extend-filesystems[1779]: Found usr Feb 12 20:24:10.105267 extend-filesystems[1779]: Found nvme0n1p4 Feb 12 20:24:10.105267 extend-filesystems[1779]: Found nvme0n1p6 Feb 12 20:24:10.105267 extend-filesystems[1779]: Found nvme0n1p7 Feb 12 20:24:10.105267 extend-filesystems[1779]: Found nvme0n1p9 Feb 12 20:24:10.105267 extend-filesystems[1779]: Checking size of /dev/nvme0n1p9 Feb 12 20:24:10.174713 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:24:10.174846 dbus-daemon[1776]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1586 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 12 20:24:10.175344 systemd[1]: Finished motdgen.service. Feb 12 20:24:10.193186 extend-filesystems[1779]: Resized partition /dev/nvme0n1p9 Feb 12 20:24:10.198268 systemd[1]: Starting systemd-hostnamed.service... 
Feb 12 20:24:10.222943 extend-filesystems[1839]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:24:10.244709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:24:10.248289 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:24:10.278004 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 12 20:24:10.292773 amazon-ssm-agent[1771]: 2024/02/12 20:24:10 Failed to load instance info from vault. RegistrationKey does not exist. Feb 12 20:24:10.296165 update_engine[1789]: I0212 20:24:10.295714 1789 main.cc:92] Flatcar Update Engine starting Feb 12 20:24:10.300421 amazon-ssm-agent[1771]: Initializing new seelog logger Feb 12 20:24:10.300871 amazon-ssm-agent[1771]: New Seelog Logger Creation Complete Feb 12 20:24:10.301264 systemd[1]: Started update-engine.service. Feb 12 20:24:10.306323 systemd[1]: Started locksmithd.service. Feb 12 20:24:10.311101 amazon-ssm-agent[1771]: 2024/02/12 20:24:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 20:24:10.312067 amazon-ssm-agent[1771]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 20:24:10.312258 update_engine[1789]: I0212 20:24:10.301847 1789 update_check_scheduler.cc:74] Next update check in 9m26s Feb 12 20:24:10.314180 amazon-ssm-agent[1771]: 2024/02/12 20:24:10 processing appconfig overrides Feb 12 20:24:10.352573 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 12 20:24:10.407740 env[1799]: time="2024-02-12T20:24:10.404034946Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:24:10.408345 extend-filesystems[1839]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 12 20:24:10.408345 extend-filesystems[1839]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:24:10.408345 extend-filesystems[1839]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Feb 12 20:24:10.436474 extend-filesystems[1779]: Resized filesystem in /dev/nvme0n1p9 Feb 12 20:24:10.414480 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:24:10.415048 systemd[1]: Finished extend-filesystems.service. Feb 12 20:24:10.446851 bash[1864]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:24:10.447895 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:24:10.549694 tar[1801]: ./static Feb 12 20:24:10.590610 systemd-logind[1787]: Watching system buttons on /dev/input/event0 (Power Button) Feb 12 20:24:10.603569 systemd-logind[1787]: New seat seat0. Feb 12 20:24:10.605983 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 20:24:10.623837 env[1799]: time="2024-02-12T20:24:10.623770403Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:24:10.632268 env[1799]: time="2024-02-12T20:24:10.632210903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:10.639725 env[1799]: time="2024-02-12T20:24:10.639624995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:10.640559 env[1799]: time="2024-02-12T20:24:10.640485275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:10.642889 env[1799]: time="2024-02-12T20:24:10.642829127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:10.645123 env[1799]: time="2024-02-12T20:24:10.645069755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:10.645708 env[1799]: time="2024-02-12T20:24:10.645654191Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:24:10.646244 env[1799]: time="2024-02-12T20:24:10.646201475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:10.649199 env[1799]: time="2024-02-12T20:24:10.649141391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:10.671064 env[1799]: time="2024-02-12T20:24:10.670329083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:24:10.671064 env[1799]: time="2024-02-12T20:24:10.670688471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:24:10.671064 env[1799]: time="2024-02-12T20:24:10.670726631Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 12 20:24:10.671064 env[1799]: time="2024-02-12T20:24:10.670892363Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:24:10.671064 env[1799]: time="2024-02-12T20:24:10.670921247Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:24:10.678009 env[1799]: time="2024-02-12T20:24:10.677834567Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:24:10.678009 env[1799]: time="2024-02-12T20:24:10.677917103Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.677949791Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.678321407Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.678366239Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.678399827Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.678431615Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.678998927Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.679059671Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.679104935Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.679137287Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.679168283Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.679421207Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:24:10.679688 env[1799]: time="2024-02-12T20:24:10.679629371Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.698677523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.698776319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.698812823Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.698922023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.698996999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699035675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699066875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699097847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699127523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699157007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699187763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699222347Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699537815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699577895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.700017 env[1799]: time="2024-02-12T20:24:10.699618395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.702457 env[1799]: time="2024-02-12T20:24:10.699656423Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:24:10.702457 env[1799]: time="2024-02-12T20:24:10.699689183Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:24:10.702457 env[1799]: time="2024-02-12T20:24:10.699716435Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:24:10.702457 env[1799]: time="2024-02-12T20:24:10.699751871Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:24:10.702457 env[1799]: time="2024-02-12T20:24:10.699819767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 20:24:10.702805 env[1799]: time="2024-02-12T20:24:10.701303159Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:24:10.702805 env[1799]: time="2024-02-12T20:24:10.701436611Z" level=info msg="Connect containerd service" Feb 12 20:24:10.702805 env[1799]: time="2024-02-12T20:24:10.701511323Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:24:10.704119 env[1799]: time="2024-02-12T20:24:10.703671683Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:24:10.704850 env[1799]: time="2024-02-12T20:24:10.704796623Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 20:24:10.705891 env[1799]: time="2024-02-12T20:24:10.705836711Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:24:10.707141 systemd[1]: Started containerd.service. 
Feb 12 20:24:10.707778 env[1799]: time="2024-02-12T20:24:10.707729255Z" level=info msg="containerd successfully booted in 0.351306s" Feb 12 20:24:10.710115 env[1799]: time="2024-02-12T20:24:10.710023007Z" level=info msg="Start subscribing containerd event" Feb 12 20:24:10.710605 env[1799]: time="2024-02-12T20:24:10.710551295Z" level=info msg="Start recovering state" Feb 12 20:24:10.710805 env[1799]: time="2024-02-12T20:24:10.710764655Z" level=info msg="Start event monitor" Feb 12 20:24:10.710891 env[1799]: time="2024-02-12T20:24:10.710824055Z" level=info msg="Start snapshots syncer" Feb 12 20:24:10.710891 env[1799]: time="2024-02-12T20:24:10.710863595Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:24:10.711079 env[1799]: time="2024-02-12T20:24:10.710886971Z" level=info msg="Start streaming server" Feb 12 20:24:10.724677 tar[1801]: ./vlan Feb 12 20:24:10.753554 systemd[1]: Started systemd-logind.service. Feb 12 20:24:10.900089 tar[1801]: ./portmap Feb 12 20:24:10.918686 dbus-daemon[1776]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 12 20:24:10.918984 systemd[1]: Started systemd-hostnamed.service. Feb 12 20:24:10.925265 dbus-daemon[1776]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1838 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 12 20:24:10.930538 systemd[1]: Starting polkit.service... Feb 12 20:24:10.966062 polkitd[1908]: Started polkitd version 121 Feb 12 20:24:11.008821 polkitd[1908]: Loading rules from directory /etc/polkit-1/rules.d Feb 12 20:24:11.008952 polkitd[1908]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 12 20:24:11.024430 polkitd[1908]: Finished loading, compiling and executing 2 rules Feb 12 20:24:11.027098 dbus-daemon[1776]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 12 20:24:11.027403 systemd[1]: Started polkit.service. 
Feb 12 20:24:11.032113 polkitd[1908]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 12 20:24:11.080995 systemd-hostnamed[1838]: Hostname set to (transient) Feb 12 20:24:11.081179 systemd-resolved[1738]: System hostname changed to 'ip-172-31-16-103'. Feb 12 20:24:11.086018 tar[1801]: ./host-local Feb 12 20:24:11.123035 coreos-metadata[1773]: Feb 12 20:24:11.122 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 20:24:11.130520 coreos-metadata[1773]: Feb 12 20:24:11.130 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 12 20:24:11.131279 coreos-metadata[1773]: Feb 12 20:24:11.130 INFO Fetch successful Feb 12 20:24:11.131279 coreos-metadata[1773]: Feb 12 20:24:11.131 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 12 20:24:11.132211 coreos-metadata[1773]: Feb 12 20:24:11.132 INFO Fetch successful Feb 12 20:24:11.136161 unknown[1773]: wrote ssh authorized keys file for user: core Feb 12 20:24:11.152941 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Create new startup processor Feb 12 20:24:11.153633 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [LongRunningPluginsManager] registered plugins: {} Feb 12 20:24:11.156098 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Initializing bookkeeping folders Feb 12 20:24:11.158651 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO removing the completed state files Feb 12 20:24:11.161515 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Initializing bookkeeping folders for long running plugins Feb 12 20:24:11.161770 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 12 20:24:11.161942 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Initializing healthcheck folders for long running plugins Feb 12 20:24:11.162095 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Initializing locations for inventory plugin Feb 12 
20:24:11.162261 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Initializing default location for custom inventory Feb 12 20:24:11.162416 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Initializing default location for file inventory Feb 12 20:24:11.162577 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Initializing default location for role inventory Feb 12 20:24:11.162861 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Init the cloudwatchlogs publisher Feb 12 20:24:11.175547 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent plugin aws:refreshAssociation Feb 12 20:24:11.175547 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent plugin aws:runDocument Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent plugin aws:softwareInventory Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent plugin aws:configureDocker Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent plugin aws:runDockerAction Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent plugin aws:configurePackage Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent plugin aws:downloadContent Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform independent 
plugin aws:runPowerShellScript Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Successfully loaded platform dependent plugin aws:runShellScript Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 12 20:24:11.177062 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO OS: linux, Arch: arm64 Feb 12 20:24:11.188031 amazon-ssm-agent[1771]: datastore file /var/lib/amazon/ssm/i-0a86e49ec3470ca11/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 12 20:24:11.189465 update-ssh-keys[1955]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:24:11.190618 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 20:24:11.218697 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] Starting document processing engine... Feb 12 20:24:11.255109 tar[1801]: ./vrf Feb 12 20:24:11.326683 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 12 20:24:11.364573 tar[1801]: ./bridge Feb 12 20:24:11.424751 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 12 20:24:11.474885 tar[1801]: ./tuning Feb 12 20:24:11.516150 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] Starting message polling Feb 12 20:24:11.556327 tar[1801]: ./firewall Feb 12 20:24:11.610860 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 12 20:24:11.620766 tar[1801]: ./host-device Feb 12 20:24:11.676999 tar[1801]: ./sbr Feb 12 20:24:11.706644 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [instanceID=i-0a86e49ec3470ca11] Starting association polling Feb 12 20:24:11.750348 tar[1801]: ./loopback Feb 12 20:24:11.801752 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] [Association] [EngineProcessor] 
Starting Feb 12 20:24:11.840763 tar[1801]: ./dhcp Feb 12 20:24:11.897107 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 12 20:24:11.992760 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 12 20:24:12.088532 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] Starting session document processing engine... Feb 12 20:24:12.101468 tar[1801]: ./ptp Feb 12 20:24:12.142148 systemd[1]: Finished prepare-critools.service. Feb 12 20:24:12.184116 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 12 20:24:12.186817 tar[1801]: ./ipvlan Feb 12 20:24:12.246710 tar[1801]: ./bandwidth Feb 12 20:24:12.280027 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 12 20:24:12.337496 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:24:12.376193 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0a86e49ec3470ca11, requestId: f6a7cf5e-93e8-487b-940d-2b35ff8ad3ab Feb 12 20:24:12.409690 locksmithd[1848]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:24:12.472427 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [OfflineService] Starting document processing engine... 
Feb 12 20:24:12.568921 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [OfflineService] [EngineProcessor] Starting Feb 12 20:24:12.665665 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [OfflineService] [EngineProcessor] Initial processing Feb 12 20:24:12.762452 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [OfflineService] Starting message polling Feb 12 20:24:12.859537 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [OfflineService] Starting send replies to MDS Feb 12 20:24:12.956888 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 12 20:24:13.054263 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 12 20:24:13.151938 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 12 20:24:13.249872 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 12 20:24:13.347842 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] listening reply. Feb 12 20:24:13.446128 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 12 20:24:13.544620 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [HealthCheck] HealthCheck reporting agent health. 
Feb 12 20:24:13.643177 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [StartupProcessor] Executing startup processor tasks Feb 12 20:24:13.742089 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 12 20:24:13.841235 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 12 20:24:13.940427 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 12 20:24:13.959074 sshd_keygen[1819]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:24:13.995424 systemd[1]: Finished sshd-keygen.service. Feb 12 20:24:14.000524 systemd[1]: Starting issuegen.service... Feb 12 20:24:14.004903 systemd[1]: Started sshd@0-172.31.16.103:22-147.75.109.163:49960.service. Feb 12 20:24:14.018765 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:24:14.019349 systemd[1]: Finished issuegen.service. Feb 12 20:24:14.024502 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:24:14.039873 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0a86e49ec3470ca11?role=subscribe&stream=input Feb 12 20:24:14.042845 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:24:14.047672 systemd[1]: Started getty@tty1.service. Feb 12 20:24:14.052546 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:24:14.055266 systemd[1]: Reached target getty.target. Feb 12 20:24:14.057301 systemd[1]: Reached target multi-user.target. Feb 12 20:24:14.062406 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:24:14.080698 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:24:14.081289 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Feb 12 20:24:14.086371 systemd[1]: Startup finished in 14.037s (kernel) + 16.144s (userspace) = 30.181s. Feb 12 20:24:14.139578 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0a86e49ec3470ca11?role=subscribe&stream=input Feb 12 20:24:14.231400 sshd[2004]: Accepted publickey for core from 147.75.109.163 port 49960 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:14.235496 sshd[2004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:14.239418 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] Starting receiving message from control channel Feb 12 20:24:14.253440 systemd[1]: Created slice user-500.slice. Feb 12 20:24:14.255681 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:24:14.260601 systemd-logind[1787]: New session 1 of user core. Feb 12 20:24:14.277154 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:24:14.279851 systemd[1]: Starting user@500.service... Feb 12 20:24:14.291269 (systemd)[2018]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:14.339485 amazon-ssm-agent[1771]: 2024-02-12 20:24:11 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 12 20:24:14.439825 amazon-ssm-agent[1771]: 2024-02-12 20:24:12 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Feb 12 20:24:14.478618 systemd[2018]: Queued start job for default target default.target. Feb 12 20:24:14.480027 systemd[2018]: Reached target paths.target. Feb 12 20:24:14.480083 systemd[2018]: Reached target sockets.target. Feb 12 20:24:14.480115 systemd[2018]: Reached target timers.target. Feb 12 20:24:14.480145 systemd[2018]: Reached target basic.target. Feb 12 20:24:14.480244 systemd[2018]: Reached target default.target. 
Feb 12 20:24:14.480308 systemd[2018]: Startup finished in 176ms. Feb 12 20:24:14.480363 systemd[1]: Started user@500.service. Feb 12 20:24:14.483114 systemd[1]: Started session-1.scope. Feb 12 20:24:14.627085 systemd[1]: Started sshd@1-172.31.16.103:22-147.75.109.163:45972.service. Feb 12 20:24:14.805474 sshd[2027]: Accepted publickey for core from 147.75.109.163 port 45972 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:14.807907 sshd[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:14.816240 systemd-logind[1787]: New session 2 of user core. Feb 12 20:24:14.817213 systemd[1]: Started session-2.scope. Feb 12 20:24:14.949452 sshd[2027]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:14.954903 systemd-logind[1787]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:24:14.956497 systemd[1]: sshd@1-172.31.16.103:22-147.75.109.163:45972.service: Deactivated successfully. Feb 12 20:24:14.958036 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:24:14.960414 systemd-logind[1787]: Removed session 2. Feb 12 20:24:14.974741 systemd[1]: Started sshd@2-172.31.16.103:22-147.75.109.163:45984.service. Feb 12 20:24:15.151539 sshd[2034]: Accepted publickey for core from 147.75.109.163 port 45984 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:15.153118 sshd[2034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:15.161108 systemd-logind[1787]: New session 3 of user core. Feb 12 20:24:15.161949 systemd[1]: Started session-3.scope. Feb 12 20:24:15.284212 sshd[2034]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:15.289703 systemd[1]: sshd@2-172.31.16.103:22-147.75.109.163:45984.service: Deactivated successfully. Feb 12 20:24:15.291079 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:24:15.292360 systemd-logind[1787]: Session 3 logged out. Waiting for processes to exit. 
Feb 12 20:24:15.295734 systemd-logind[1787]: Removed session 3. Feb 12 20:24:15.310866 systemd[1]: Started sshd@3-172.31.16.103:22-147.75.109.163:46000.service. Feb 12 20:24:15.489168 sshd[2041]: Accepted publickey for core from 147.75.109.163 port 46000 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:15.492182 sshd[2041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:15.500604 systemd[1]: Started session-4.scope. Feb 12 20:24:15.501509 systemd-logind[1787]: New session 4 of user core. Feb 12 20:24:15.636622 sshd[2041]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:15.640778 systemd[1]: sshd@3-172.31.16.103:22-147.75.109.163:46000.service: Deactivated successfully. Feb 12 20:24:15.642162 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:24:15.644432 systemd-logind[1787]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:24:15.646030 systemd-logind[1787]: Removed session 4. Feb 12 20:24:15.662311 systemd[1]: Started sshd@4-172.31.16.103:22-147.75.109.163:46010.service. Feb 12 20:24:15.839702 sshd[2048]: Accepted publickey for core from 147.75.109.163 port 46010 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:15.842652 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:15.851073 systemd[1]: Started session-5.scope. Feb 12 20:24:15.851658 systemd-logind[1787]: New session 5 of user core. 
Feb 12 20:24:15.983820 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 12 20:24:15.984878 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:24:16.000233 dbus-daemon[1776]: avc: received setenforce notice (enforcing=1) Feb 12 20:24:16.002874 sudo[2052]: pam_unix(sudo:session): session closed for user root Feb 12 20:24:16.028380 sshd[2048]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:16.033723 systemd-logind[1787]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:24:16.035645 systemd[1]: sshd@4-172.31.16.103:22-147.75.109.163:46010.service: Deactivated successfully. Feb 12 20:24:16.037215 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:24:16.038790 systemd-logind[1787]: Removed session 5. Feb 12 20:24:16.052989 systemd[1]: Started sshd@5-172.31.16.103:22-147.75.109.163:46026.service. Feb 12 20:24:16.227112 sshd[2056]: Accepted publickey for core from 147.75.109.163 port 46026 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:16.230170 sshd[2056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:16.238638 systemd[1]: Started session-6.scope. Feb 12 20:24:16.239865 systemd-logind[1787]: New session 6 of user core. Feb 12 20:24:16.349000 sudo[2061]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 12 20:24:16.349496 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:24:16.354803 sudo[2061]: pam_unix(sudo:session): session closed for user root Feb 12 20:24:16.364080 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 12 20:24:16.365084 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:24:16.382923 systemd[1]: Stopping audit-rules.service... 
Feb 12 20:24:16.383000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 20:24:16.386985 kernel: kauditd_printk_skb: 37 callbacks suppressed Feb 12 20:24:16.387047 kernel: audit: type=1305 audit(1707769456.383:128): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 12 20:24:16.388290 auditctl[2064]: No rules Feb 12 20:24:16.392712 systemd[1]: audit-rules.service: Deactivated successfully. Feb 12 20:24:16.383000 audit[2064]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc437e020 a2=420 a3=0 items=0 ppid=1 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:16.393277 systemd[1]: Stopped audit-rules.service. Feb 12 20:24:16.397131 systemd[1]: Starting audit-rules.service... Feb 12 20:24:16.383000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 12 20:24:16.414209 kernel: audit: type=1300 audit(1707769456.383:128): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc437e020 a2=420 a3=0 items=0 ppid=1 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:16.414310 kernel: audit: type=1327 audit(1707769456.383:128): proctitle=2F7362696E2F617564697463746C002D44 Feb 12 20:24:16.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:16.426769 kernel: audit: type=1131 audit(1707769456.392:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.443288 augenrules[2082]: No rules Feb 12 20:24:16.445011 systemd[1]: Finished audit-rules.service. Feb 12 20:24:16.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.455081 sudo[2060]: pam_unix(sudo:session): session closed for user root Feb 12 20:24:16.454000 audit[2060]: USER_END pid=2060 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.464613 kernel: audit: type=1130 audit(1707769456.444:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.464709 kernel: audit: type=1106 audit(1707769456.454:131): pid=2060 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.454000 audit[2060]: CRED_DISP pid=2060 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:16.473163 kernel: audit: type=1104 audit(1707769456.454:132): pid=2060 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.479283 sshd[2056]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:16.481000 audit[2056]: USER_END pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:16.485175 systemd[1]: sshd@5-172.31.16.103:22-147.75.109.163:46026.service: Deactivated successfully. Feb 12 20:24:16.486472 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 20:24:16.481000 audit[2056]: CRED_DISP pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:16.496132 kernel: audit: type=1106 audit(1707769456.481:133): pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:16.495518 systemd-logind[1787]: Session 6 logged out. Waiting for processes to exit. Feb 12 20:24:16.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.103:22-147.75.109.163:46026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.510356 systemd-logind[1787]: Removed session 6. 
Feb 12 20:24:16.511733 systemd[1]: Started sshd@6-172.31.16.103:22-147.75.109.163:46034.service. Feb 12 20:24:16.517360 kernel: audit: type=1104 audit(1707769456.481:134): pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:16.517470 kernel: audit: type=1131 audit(1707769456.484:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.103:22-147.75.109.163:46026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.103:22-147.75.109.163:46034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.685000 audit[2089]: USER_ACCT pid=2089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:16.687210 sshd[2089]: Accepted publickey for core from 147.75.109.163 port 46034 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:16.687000 audit[2089]: CRED_ACQ pid=2089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:16.688000 audit[2089]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff71e2b20 a2=3 a3=1 items=0 ppid=1 pid=2089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:16.688000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:16.690147 sshd[2089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:16.697054 systemd-logind[1787]: New session 7 of user core. Feb 12 20:24:16.698866 systemd[1]: Started session-7.scope. Feb 12 20:24:16.708000 audit[2089]: USER_START pid=2089 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:16.711000 audit[2092]: CRED_ACQ pid=2092 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:16.806000 audit[2093]: USER_ACCT pid=2093 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.808212 sudo[2093]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:24:16.807000 audit[2093]: CRED_REFR pid=2093 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:24:16.808762 sudo[2093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:24:16.811000 audit[2093]: USER_START pid=2093 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:17.460916 systemd[1]: Reloading. Feb 12 20:24:17.588950 /usr/lib/systemd/system-generators/torcx-generator[2125]: time="2024-02-12T20:24:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:17.602500 /usr/lib/systemd/system-generators/torcx-generator[2125]: time="2024-02-12T20:24:17Z" level=info msg="torcx already run" Feb 12 20:24:17.757879 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:17.757919 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:17.800822 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:18.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:18.012049 systemd[1]: Started kubelet.service. Feb 12 20:24:18.037446 systemd[1]: Starting coreos-metadata.service... Feb 12 20:24:18.156329 kubelet[2183]: E0212 20:24:18.156232 2183 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:24:18.160943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:24:18.161375 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 20:24:18.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 20:24:18.217546 coreos-metadata[2191]: Feb 12 20:24:18.217 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 20:24:18.218870 coreos-metadata[2191]: Feb 12 20:24:18.218 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Feb 12 20:24:18.219491 coreos-metadata[2191]: Feb 12 20:24:18.219 INFO Fetch successful Feb 12 20:24:18.219759 coreos-metadata[2191]: Feb 12 20:24:18.219 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Feb 12 20:24:18.220228 coreos-metadata[2191]: Feb 12 20:24:18.220 INFO Fetch successful Feb 12 20:24:18.220521 coreos-metadata[2191]: Feb 12 20:24:18.220 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Feb 12 20:24:18.220836 coreos-metadata[2191]: Feb 12 20:24:18.220 INFO Fetch successful Feb 12 20:24:18.221111 coreos-metadata[2191]: Feb 12 20:24:18.220 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Feb 12 20:24:18.221424 coreos-metadata[2191]: Feb 12 20:24:18.221 INFO Fetch successful Feb 12 20:24:18.221677 coreos-metadata[2191]: Feb 12 20:24:18.221 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Feb 12 20:24:18.222024 coreos-metadata[2191]: Feb 12 20:24:18.221 INFO Fetch successful Feb 12 20:24:18.222279 coreos-metadata[2191]: Feb 12 20:24:18.222 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Feb 12 20:24:18.222587 coreos-metadata[2191]: Feb 12 20:24:18.222 INFO Fetch successful Feb 12 20:24:18.222837 coreos-metadata[2191]: Feb 12 20:24:18.222 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Feb 12 20:24:18.223178 coreos-metadata[2191]: Feb 12 
20:24:18.222 INFO Fetch successful Feb 12 20:24:18.223679 coreos-metadata[2191]: Feb 12 20:24:18.223 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Feb 12 20:24:18.223949 coreos-metadata[2191]: Feb 12 20:24:18.223 INFO Fetch successful Feb 12 20:24:18.240269 systemd[1]: Finished coreos-metadata.service. Feb 12 20:24:18.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:19.079108 systemd[1]: Stopped kubelet.service. Feb 12 20:24:19.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:19.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:19.109776 systemd[1]: Reloading. Feb 12 20:24:19.231092 /usr/lib/systemd/system-generators/torcx-generator[2253]: time="2024-02-12T20:24:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:19.231157 /usr/lib/systemd/system-generators/torcx-generator[2253]: time="2024-02-12T20:24:19Z" level=info msg="torcx already run" Feb 12 20:24:19.420600 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:19.420642 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 12 20:24:19.465109 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:19.684030 systemd[1]: Started kubelet.service. Feb 12 20:24:19.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:19.785324 kubelet[2315]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:24:19.785324 kubelet[2315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:19.785890 kubelet[2315]: I0212 20:24:19.785461 2315 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:24:19.787910 kubelet[2315]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:24:19.787910 kubelet[2315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 20:24:20.622474 kubelet[2315]: I0212 20:24:20.622435 2315 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:24:20.622675 kubelet[2315]: I0212 20:24:20.622644 2315 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:24:20.623225 kubelet[2315]: I0212 20:24:20.623200 2315 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:24:20.628460 kubelet[2315]: I0212 20:24:20.628421 2315 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:24:20.632512 kubelet[2315]: W0212 20:24:20.632459 2315 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 20:24:20.636093 kubelet[2315]: I0212 20:24:20.636039 2315 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:24:20.636937 kubelet[2315]: I0212 20:24:20.636895 2315 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:24:20.637111 kubelet[2315]: I0212 20:24:20.637078 2315 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} 
GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:24:20.637280 kubelet[2315]: I0212 20:24:20.637143 2315 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:24:20.637280 kubelet[2315]: I0212 20:24:20.637170 2315 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:24:20.637418 kubelet[2315]: I0212 20:24:20.637353 2315 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:20.642922 kubelet[2315]: I0212 20:24:20.642890 2315 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:24:20.643146 kubelet[2315]: I0212 20:24:20.643124 2315 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:24:20.643361 kubelet[2315]: I0212 20:24:20.643338 2315 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:24:20.643505 kubelet[2315]: I0212 20:24:20.643480 2315 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:24:20.644445 kubelet[2315]: E0212 20:24:20.644416 2315 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:20.644653 kubelet[2315]: E0212 20:24:20.644631 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:20.645307 kubelet[2315]: I0212 20:24:20.645260 2315 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:24:20.646282 kubelet[2315]: W0212 20:24:20.646234 2315 probe.go:268] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:24:20.647134 kubelet[2315]: I0212 20:24:20.647090 2315 server.go:1186] "Started kubelet" Feb 12 20:24:20.649172 kubelet[2315]: I0212 20:24:20.649125 2315 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:24:20.650314 kubelet[2315]: I0212 20:24:20.650263 2315 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:24:20.649000 audit[2315]: AVC avc: denied { mac_admin } for pid=2315 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.649000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:24:20.649000 audit[2315]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000679cb0 a1=40007d9458 a2=4000679c80 a3=25 items=0 ppid=1 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.649000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:24:20.651384 kubelet[2315]: I0212 20:24:20.651352 2315 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 20:24:20.650000 audit[2315]: AVC avc: denied { mac_admin } for pid=2315 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.650000 audit: SELINUX_ERR 
op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:24:20.650000 audit[2315]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000edadc0 a1=40007d9470 a2=4000679d40 a3=25 items=0 ppid=1 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.650000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:24:20.652056 kubelet[2315]: I0212 20:24:20.652028 2315 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 20:24:20.652320 kubelet[2315]: I0212 20:24:20.652299 2315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:24:20.652727 kubelet[2315]: E0212 20:24:20.652649 2315 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:24:20.652727 kubelet[2315]: E0212 20:24:20.652717 2315 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:24:20.662736 kubelet[2315]: E0212 20:24:20.662580 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746bc20a669", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 647044713, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 647044713, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.663042 kubelet[2315]: W0212 20:24:20.663006 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:20.663153 kubelet[2315]: E0212 20:24:20.663066 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:20.663153 kubelet[2315]: W0212 20:24:20.663136 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:20.663269 kubelet[2315]: E0212 20:24:20.663160 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:20.664878 kubelet[2315]: E0212 20:24:20.664837 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:20.665042 kubelet[2315]: I0212 20:24:20.664915 2315 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:24:20.665115 kubelet[2315]: I0212 20:24:20.665055 2315 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:24:20.679642 kubelet[2315]: E0212 20:24:20.679429 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746bc76f04d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 652699725, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 652699725, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.680125 kubelet[2315]: E0212 20:24:20.680097 2315 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.16.103" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:20.680346 kubelet[2315]: W0212 20:24:20.680301 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:20.680623 kubelet[2315]: E0212 20:24:20.680602 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:20.713000 audit[2329]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.713000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdc7ffd40 a2=0 a3=1 items=0 ppid=2315 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 20:24:20.715000 audit[2333]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.715000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffff50b3a40 a2=0 a3=1 items=0 ppid=2315 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 20:24:20.720000 audit[2335]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.720000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcad62620 a2=0 a3=1 items=0 ppid=2315 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 20:24:20.747000 audit[2340]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.747000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc21f7830 a2=0 a3=1 items=0 ppid=2315 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 20:24:20.770437 kubelet[2315]: I0212 20:24:20.770377 2315 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.103" Feb 12 20:24:20.772666 kubelet[2315]: E0212 20:24:20.772608 2315 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" 
cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.103" Feb 12 20:24:20.775498 kubelet[2315]: E0212 20:24:20.774319 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ab0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.103 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770319117, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770319117, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.776927 kubelet[2315]: E0212 20:24:20.776764 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ce65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.103 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770328165, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770328165, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.782488 kubelet[2315]: E0212 20:24:20.778320 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379e569", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.103 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770334057, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770334057, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.782488 kubelet[2315]: I0212 20:24:20.781366 2315 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:24:20.782488 kubelet[2315]: I0212 20:24:20.781397 2315 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:24:20.782488 kubelet[2315]: I0212 20:24:20.781424 2315 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:20.788311 kubelet[2315]: E0212 20:24:20.783110 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ab0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.103 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770319117, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 779033469, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ab0d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.788458 kubelet[2315]: E0212 20:24:20.784476 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ce65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.103 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770328165, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 779042061, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ce65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.788458 kubelet[2315]: I0212 20:24:20.786364 2315 policy_none.go:49] "None policy: Start" Feb 12 20:24:20.788458 kubelet[2315]: I0212 20:24:20.788046 2315 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:24:20.788458 kubelet[2315]: I0212 20:24:20.788097 2315 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:24:20.789129 kubelet[2315]: E0212 20:24:20.788983 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379e569", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.103 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770334057, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 779047065, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379e569" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.815533 kubelet[2315]: I0212 20:24:20.815478 2315 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:24:20.815685 kubelet[2315]: I0212 20:24:20.815603 2315 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 20:24:20.814000 audit[2315]: AVC avc: denied { mac_admin } for pid=2315 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.814000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:24:20.814000 audit[2315]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d3f3b0 a1=4000d362b8 a2=4000d3f380 a3=25 items=0 ppid=1 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.814000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:24:20.817048 kubelet[2315]: I0212 20:24:20.815978 2315 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:24:20.821347 kubelet[2315]: E0212 20:24:20.821307 2315 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.103\" not found" Feb 12 20:24:20.821536 kubelet[2315]: E0212 20:24:20.821425 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c6666c42", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 819389506, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 819389506, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.823000 audit[2345]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.823000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcf6efeb0 a2=0 a3=1 items=0 ppid=2315 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 12 20:24:20.826000 audit[2346]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.826000 audit[2346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff678c690 a2=0 a3=1 items=0 ppid=2315 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 20:24:20.836000 audit[2349]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.836000 audit[2349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd9a15ee0 a2=0 a3=1 items=0 ppid=2315 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.836000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 20:24:20.844000 audit[2352]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.844000 audit[2352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffe9619ee0 a2=0 a3=1 items=0 ppid=2315 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 20:24:20.847000 audit[2353]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.847000 audit[2353]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd7e8cea0 a2=0 a3=1 items=0 ppid=2315 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 20:24:20.849000 audit[2354]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.849000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe34b2560 a2=0 a3=1 items=0 ppid=2315 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 20:24:20.853000 audit[2356]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.853000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd60ac9d0 a2=0 a3=1 items=0 ppid=2315 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 20:24:20.882803 kubelet[2315]: E0212 20:24:20.882678 2315 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.16.103" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:20.858000 audit[2358]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.858000 audit[2358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffd35ef20 a2=0 a3=1 items=0 ppid=2315 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.858000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 20:24:20.892000 audit[2361]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.892000 audit[2361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffc4f1afc0 a2=0 a3=1 items=0 ppid=2315 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 20:24:20.897000 audit[2363]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.897000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffcfa142f0 a2=0 a3=1 items=0 ppid=2315 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 20:24:20.910000 audit[2366]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.910000 audit[2366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffffe1caf80 a2=0 a3=1 
items=0 ppid=2315 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 20:24:20.911881 kubelet[2315]: I0212 20:24:20.911852 2315 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:24:20.912000 audit[2367]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.912000 audit[2367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd0cdbcf0 a2=0 a3=1 items=0 ppid=2315 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 20:24:20.913000 audit[2368]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.913000 audit[2368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcdcb1500 a2=0 a3=1 items=0 ppid=2315 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.913000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 20:24:20.915000 audit[2369]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.915000 audit[2369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff56e1db0 a2=0 a3=1 items=0 ppid=2315 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 20:24:20.916000 audit[2370]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.916000 audit[2370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffff563cc0 a2=0 a3=1 items=0 ppid=2315 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 20:24:20.916000 audit[2371]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:20.916000 audit[2371]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0ef5d30 a2=0 a3=1 items=0 ppid=2315 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.916000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 20:24:20.921000 audit[2373]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.921000 audit[2373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcb6862b0 a2=0 a3=1 items=0 ppid=2315 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 20:24:20.923000 audit[2374]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.923000 audit[2374]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe335ff10 a2=0 a3=1 items=0 ppid=2315 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 20:24:20.928000 audit[2376]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.928000 audit[2376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff91939d0 a2=0 a3=1 items=0 ppid=2315 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 20:24:20.930000 audit[2377]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.930000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdf1f0940 a2=0 a3=1 items=0 ppid=2315 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 20:24:20.932000 audit[2378]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.932000 audit[2378]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5ebd2c0 a2=0 a3=1 items=0 ppid=2315 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 20:24:20.936000 audit[2380]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.936000 audit[2380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdcc70ef0 a2=0 a3=1 items=0 ppid=2315 pid=2380 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 20:24:20.940000 audit[2382]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.940000 audit[2382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe4423be0 a2=0 a3=1 items=0 ppid=2315 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 20:24:20.944000 audit[2384]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.944000 audit[2384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffff64df050 a2=0 a3=1 items=0 ppid=2315 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 20:24:20.949000 audit[2386]: 
NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.949000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff33463b0 a2=0 a3=1 items=0 ppid=2315 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 20:24:20.955000 audit[2388]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.955000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffdccec060 a2=0 a3=1 items=0 ppid=2315 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 20:24:20.957035 kubelet[2315]: I0212 20:24:20.956988 2315 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:24:20.957184 kubelet[2315]: I0212 20:24:20.957162 2315 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:24:20.957307 kubelet[2315]: I0212 20:24:20.957286 2315 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:24:20.957495 kubelet[2315]: E0212 20:24:20.957474 2315 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:24:20.959625 kubelet[2315]: W0212 20:24:20.959588 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:20.959825 kubelet[2315]: E0212 20:24:20.959802 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:20.958000 audit[2389]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.958000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee2ba790 a2=0 a3=1 items=0 ppid=2315 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 20:24:20.960000 audit[2390]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.960000 audit[2390]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc16b2a20 a2=0 a3=1 items=0 ppid=2315 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 20:24:20.962000 audit[2391]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:20.962000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8d1ef10 a2=0 a3=1 items=0 ppid=2315 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 20:24:20.974151 kubelet[2315]: I0212 20:24:20.974119 2315 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.103" Feb 12 20:24:20.980280 kubelet[2315]: E0212 20:24:20.980249 2315 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.103" Feb 12 20:24:20.980598 kubelet[2315]: E0212 20:24:20.980494 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ab0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.103 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770319117, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 974070310, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ab0d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:20.984151 kubelet[2315]: E0212 20:24:20.984025 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ce65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.103 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770328165, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 974078374, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ce65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:21.051376 kubelet[2315]: E0212 20:24:21.051226 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379e569", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.103 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770334057, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 974083342, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379e569" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:21.285475 kubelet[2315]: E0212 20:24:21.285435 2315 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.16.103" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:21.382505 kubelet[2315]: I0212 20:24:21.382445 2315 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.103" Feb 12 20:24:21.383837 kubelet[2315]: E0212 20:24:21.383777 2315 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.103" Feb 12 20:24:21.383837 kubelet[2315]: E0212 20:24:21.383685 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ab0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.103 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770319117, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 21, 381844280, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ab0d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:21.451378 kubelet[2315]: E0212 20:24:21.451252 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ce65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.103 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770328165, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 21, 381869168, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ce65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:21.574621 kubelet[2315]: W0212 20:24:21.574018 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:21.574621 kubelet[2315]: E0212 20:24:21.574099 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:21.645447 kubelet[2315]: E0212 20:24:21.645398 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:21.651168 kubelet[2315]: E0212 20:24:21.651013 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379e569", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.103 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770334057, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 21, 381874880, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379e569" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:21.927735 kubelet[2315]: W0212 20:24:21.927220 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:21.927735 kubelet[2315]: E0212 20:24:21.927300 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:22.088023 kubelet[2315]: E0212 20:24:22.087943 2315 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.16.103" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:22.092835 kubelet[2315]: W0212 20:24:22.092771 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:22.093126 kubelet[2315]: E0212 20:24:22.093100 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:22.098910 kubelet[2315]: W0212 
20:24:22.098868 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:22.099179 kubelet[2315]: E0212 20:24:22.099156 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:22.186596 kubelet[2315]: I0212 20:24:22.186080 2315 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.103" Feb 12 20:24:22.190695 kubelet[2315]: E0212 20:24:22.190441 2315 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.103" Feb 12 20:24:22.190695 kubelet[2315]: E0212 20:24:22.190545 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ab0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.103 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770319117, time.Local), 
LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 22, 185953328, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ab0d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:22.193460 kubelet[2315]: E0212 20:24:22.193252 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ce65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.103 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770328165, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 22, 186008276, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ce65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:22.251407 kubelet[2315]: E0212 20:24:22.251283 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379e569", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.103 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770334057, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 22, 186021068, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379e569" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:22.646047 kubelet[2315]: E0212 20:24:22.645995 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:23.646630 kubelet[2315]: E0212 20:24:23.646562 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:23.689627 kubelet[2315]: E0212 20:24:23.689589 2315 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.16.103" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:23.792603 kubelet[2315]: I0212 20:24:23.792562 2315 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.103" Feb 12 20:24:23.793708 kubelet[2315]: E0212 20:24:23.793596 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ab0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.103 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770319117, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 23, 791936484, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ab0d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:23.794929 kubelet[2315]: E0212 20:24:23.794897 2315 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.103" Feb 12 20:24:23.797810 kubelet[2315]: E0212 20:24:23.797672 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ce65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.103 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770328165, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 23, 791943900, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ce65" is 
forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 20:24:23.799358 kubelet[2315]: E0212 20:24:23.799254 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379e569", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.103 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770334057, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 23, 791952756, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379e569" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:24.246788 kubelet[2315]: W0212 20:24:24.246733 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:24.247145 kubelet[2315]: E0212 20:24:24.247101 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:24.646934 kubelet[2315]: E0212 20:24:24.646770 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:24.766743 kubelet[2315]: W0212 20:24:24.766695 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:24.766743 kubelet[2315]: E0212 20:24:24.766746 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:24.798991 kubelet[2315]: W0212 20:24:24.798929 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:24.799214 kubelet[2315]: E0212 20:24:24.799193 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the 
cluster scope Feb 12 20:24:25.151681 kubelet[2315]: W0212 20:24:25.151643 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:25.151952 kubelet[2315]: E0212 20:24:25.151930 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:25.647442 kubelet[2315]: E0212 20:24:25.647397 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:26.649211 kubelet[2315]: E0212 20:24:26.649166 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:26.895755 kubelet[2315]: E0212 20:24:26.895704 2315 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.16.103" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 20:24:26.996049 kubelet[2315]: I0212 20:24:26.996012 2315 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.103" Feb 12 20:24:26.997724 kubelet[2315]: E0212 20:24:26.997684 2315 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.103" Feb 12 20:24:26.999260 kubelet[2315]: E0212 20:24:26.999129 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ab0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.103 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770319117, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 26, 995916520, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ab0d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:27.001283 kubelet[2315]: E0212 20:24:27.001163 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379ce65", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.103 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770328165, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 26, 995925280, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379ce65" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:27.004351 kubelet[2315]: E0212 20:24:27.004222 2315 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103.17b33746c379e569", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.103", UID:"172.31.16.103", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.103 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.103"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 20, 770334057, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 26, 995930344, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.103.17b33746c379e569" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 20:24:27.650397 kubelet[2315]: E0212 20:24:27.650349 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:28.074100 kubelet[2315]: W0212 20:24:28.074059 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:28.074338 kubelet[2315]: E0212 20:24:28.074314 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 20:24:28.548451 kubelet[2315]: W0212 20:24:28.548401 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:28.548703 kubelet[2315]: E0212 20:24:28.548673 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 20:24:28.651543 kubelet[2315]: E0212 20:24:28.651469 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:29.071358 kubelet[2315]: W0212 20:24:29.071294 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:29.071358 kubelet[2315]: E0212 20:24:29.071354 2315 
reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 20:24:29.652185 kubelet[2315]: E0212 20:24:29.652119 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:30.089511 kubelet[2315]: W0212 20:24:30.089472 2315 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:30.089726 kubelet[2315]: E0212 20:24:30.089704 2315 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 20:24:30.626647 kubelet[2315]: I0212 20:24:30.626572 2315 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 20:24:30.652458 kubelet[2315]: E0212 20:24:30.652410 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:30.822299 kubelet[2315]: E0212 20:24:30.822241 2315 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.103\" not found" Feb 12 20:24:31.027587 kubelet[2315]: E0212 20:24:31.027518 2315 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.16.103" not found Feb 12 20:24:31.653622 kubelet[2315]: E0212 20:24:31.653547 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 20:24:32.066151 kubelet[2315]: E0212 20:24:32.066117 2315 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.16.103" not found Feb 12 20:24:32.654723 kubelet[2315]: E0212 20:24:32.654660 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:33.302356 kubelet[2315]: E0212 20:24:33.302298 2315 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.16.103\" not found" node="172.31.16.103" Feb 12 20:24:33.398845 kubelet[2315]: I0212 20:24:33.398789 2315 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.103" Feb 12 20:24:33.468132 kubelet[2315]: I0212 20:24:33.468057 2315 kubelet_node_status.go:73] "Successfully registered node" node="172.31.16.103" Feb 12 20:24:33.484169 kubelet[2315]: E0212 20:24:33.484085 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:33.584917 kubelet[2315]: E0212 20:24:33.584749 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:33.655530 kubelet[2315]: E0212 20:24:33.655467 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:33.685868 kubelet[2315]: E0212 20:24:33.685806 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:33.705000 audit[2093]: USER_END pid=2093 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:33.706311 sudo[2093]: pam_unix(sudo:session): session closed for user root Feb 12 20:24:33.708544 kernel: kauditd_printk_skb: 128 callbacks suppressed Feb 12 20:24:33.708664 kernel: audit: type=1106 audit(1707769473.705:187): pid=2093 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:24:33.705000 audit[2093]: CRED_DISP pid=2093 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:24:33.726236 kernel: audit: type=1104 audit(1707769473.705:188): pid=2093 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:24:33.730317 sshd[2089]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:33.732000 audit[2089]: USER_END pid=2089 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:33.732000 audit[2089]: CRED_DISP pid=2089 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:33.746592 systemd[1]: sshd@6-172.31.16.103:22-147.75.109.163:46034.service: Deactivated successfully. Feb 12 20:24:33.750252 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 12 20:24:33.755282 kernel: audit: type=1106 audit(1707769473.732:189): pid=2089 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:33.755409 kernel: audit: type=1104 audit(1707769473.732:190): pid=2089 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:33.755415 systemd-logind[1787]: Session 7 logged out. Waiting for processes to exit. Feb 12 20:24:33.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.103:22-147.75.109.163:46034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:33.765158 kernel: audit: type=1131 audit(1707769473.745:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.103:22-147.75.109.163:46034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:33.766222 systemd-logind[1787]: Removed session 7. 
Feb 12 20:24:33.787097 kubelet[2315]: E0212 20:24:33.786952 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:33.887981 kubelet[2315]: E0212 20:24:33.887751 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:33.988656 kubelet[2315]: E0212 20:24:33.988542 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.089279 kubelet[2315]: E0212 20:24:34.089157 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.189553 kubelet[2315]: E0212 20:24:34.189369 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.232335 amazon-ssm-agent[1771]: 2024-02-12 20:24:34 INFO [HealthCheck] HealthCheck reporting agent health. 
Feb 12 20:24:34.290205 kubelet[2315]: E0212 20:24:34.290136 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.390912 kubelet[2315]: E0212 20:24:34.390848 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.491752 kubelet[2315]: E0212 20:24:34.491694 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.592411 kubelet[2315]: E0212 20:24:34.592350 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.656233 kubelet[2315]: E0212 20:24:34.656162 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:34.692558 kubelet[2315]: E0212 20:24:34.692503 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.793876 kubelet[2315]: E0212 20:24:34.793298 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.894321 kubelet[2315]: E0212 20:24:34.894258 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:34.994701 kubelet[2315]: E0212 20:24:34.994639 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.096075 kubelet[2315]: E0212 20:24:35.095440 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.196444 kubelet[2315]: E0212 20:24:35.196382 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.297189 kubelet[2315]: E0212 
20:24:35.297132 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.398329 kubelet[2315]: E0212 20:24:35.397836 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.498620 kubelet[2315]: E0212 20:24:35.498559 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.599319 kubelet[2315]: E0212 20:24:35.599260 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.657542 kubelet[2315]: E0212 20:24:35.657043 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:35.699705 kubelet[2315]: E0212 20:24:35.699644 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.800400 kubelet[2315]: E0212 20:24:35.800341 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:35.901322 kubelet[2315]: E0212 20:24:35.901253 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.001441 kubelet[2315]: E0212 20:24:36.001389 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.102229 kubelet[2315]: E0212 20:24:36.102168 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.203224 kubelet[2315]: E0212 20:24:36.203165 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.304046 kubelet[2315]: E0212 20:24:36.303650 2315 kubelet_node_status.go:458] 
"Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.404189 kubelet[2315]: E0212 20:24:36.404128 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.504759 kubelet[2315]: E0212 20:24:36.504699 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.605794 kubelet[2315]: E0212 20:24:36.605427 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.658151 kubelet[2315]: E0212 20:24:36.658077 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:36.706668 kubelet[2315]: E0212 20:24:36.706581 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.807375 kubelet[2315]: E0212 20:24:36.807306 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:36.908742 kubelet[2315]: E0212 20:24:36.908332 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.008570 kubelet[2315]: E0212 20:24:37.008504 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.109290 kubelet[2315]: E0212 20:24:37.109225 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.209769 kubelet[2315]: E0212 20:24:37.209399 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.310109 kubelet[2315]: E0212 20:24:37.310061 2315 kubelet_node_status.go:458] "Error getting the current node from lister" 
err="node \"172.31.16.103\" not found" Feb 12 20:24:37.411207 kubelet[2315]: E0212 20:24:37.411142 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.512061 kubelet[2315]: E0212 20:24:37.512013 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.612781 kubelet[2315]: E0212 20:24:37.612714 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.658303 kubelet[2315]: E0212 20:24:37.658257 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:37.713992 kubelet[2315]: E0212 20:24:37.713911 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.815050 kubelet[2315]: E0212 20:24:37.814628 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:37.915689 kubelet[2315]: E0212 20:24:37.915626 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.016862 kubelet[2315]: E0212 20:24:38.016789 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.118371 kubelet[2315]: E0212 20:24:38.118004 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.218930 kubelet[2315]: E0212 20:24:38.218866 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.319165 kubelet[2315]: E0212 20:24:38.319121 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 
20:24:38.420483 kubelet[2315]: E0212 20:24:38.420129 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.521593 kubelet[2315]: E0212 20:24:38.521524 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.622228 kubelet[2315]: E0212 20:24:38.622167 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.659263 kubelet[2315]: E0212 20:24:38.659197 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:38.722927 kubelet[2315]: E0212 20:24:38.722863 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.823704 kubelet[2315]: E0212 20:24:38.823637 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:38.924788 kubelet[2315]: E0212 20:24:38.924727 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.025491 kubelet[2315]: E0212 20:24:39.025103 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.125751 kubelet[2315]: E0212 20:24:39.125673 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.226573 kubelet[2315]: E0212 20:24:39.226488 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.327814 kubelet[2315]: E0212 20:24:39.327414 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.428090 kubelet[2315]: E0212 
20:24:39.428031 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.528737 kubelet[2315]: E0212 20:24:39.528696 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.629865 kubelet[2315]: E0212 20:24:39.629508 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.660057 kubelet[2315]: E0212 20:24:39.660009 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:39.730583 kubelet[2315]: E0212 20:24:39.730538 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.831613 kubelet[2315]: E0212 20:24:39.831560 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:39.932894 kubelet[2315]: E0212 20:24:39.932500 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.032808 kubelet[2315]: E0212 20:24:40.032771 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.133819 kubelet[2315]: E0212 20:24:40.133779 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.234665 kubelet[2315]: E0212 20:24:40.234630 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.335289 kubelet[2315]: E0212 20:24:40.335242 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.436003 kubelet[2315]: E0212 20:24:40.435927 2315 kubelet_node_status.go:458] 
"Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.536878 kubelet[2315]: E0212 20:24:40.536547 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.637622 kubelet[2315]: E0212 20:24:40.637579 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.643832 kubelet[2315]: E0212 20:24:40.643775 2315 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:40.661415 kubelet[2315]: E0212 20:24:40.661388 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:40.738172 kubelet[2315]: E0212 20:24:40.738127 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.823111 kubelet[2315]: E0212 20:24:40.822680 2315 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.103\" not found" Feb 12 20:24:40.838949 kubelet[2315]: E0212 20:24:40.838882 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:40.939571 kubelet[2315]: E0212 20:24:40.939510 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:41.040651 kubelet[2315]: E0212 20:24:41.040598 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:41.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:41.115926 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 12 20:24:41.126049 kernel: audit: type=1131 audit(1707769481.115:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:41.141502 kubelet[2315]: E0212 20:24:41.141412 2315 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.103\" not found" Feb 12 20:24:41.243373 kubelet[2315]: I0212 20:24:41.243327 2315 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 20:24:41.244045 env[1799]: time="2024-02-12T20:24:41.243871347Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 20:24:41.244613 kubelet[2315]: I0212 20:24:41.244283 2315 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 20:24:41.658319 kubelet[2315]: I0212 20:24:41.658276 2315 apiserver.go:52] "Watching apiserver" Feb 12 20:24:41.662145 kubelet[2315]: E0212 20:24:41.662073 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:41.663600 kubelet[2315]: I0212 20:24:41.663556 2315 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:41.664075 kubelet[2315]: I0212 20:24:41.663932 2315 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:41.664751 kubelet[2315]: I0212 20:24:41.664712 2315 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:41.667590 kubelet[2315]: E0212 20:24:41.667548 2315 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:41.701106 kubelet[2315]: I0212 20:24:41.701046 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8034d7ff-596e-4754-add6-fbf065068070-tigera-ca-bundle\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701303 kubelet[2315]: I0212 20:24:41.701154 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-cni-bin-dir\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701303 kubelet[2315]: I0212 20:24:41.701238 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/025e7f47-e5fc-44a0-ae5d-89a7aa729804-varrun\") pod \"csi-node-driver-mbm2f\" (UID: \"025e7f47-e5fc-44a0-ae5d-89a7aa729804\") " pod="calico-system/csi-node-driver-mbm2f" Feb 12 20:24:41.701443 kubelet[2315]: I0212 20:24:41.701314 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/025e7f47-e5fc-44a0-ae5d-89a7aa729804-kubelet-dir\") pod \"csi-node-driver-mbm2f\" (UID: \"025e7f47-e5fc-44a0-ae5d-89a7aa729804\") " pod="calico-system/csi-node-driver-mbm2f" Feb 12 20:24:41.701443 kubelet[2315]: I0212 20:24:41.701377 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-xtables-lock\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " 
pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701443 kubelet[2315]: I0212 20:24:41.701421 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-policysync\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701646 kubelet[2315]: I0212 20:24:41.701487 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-cni-net-dir\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701646 kubelet[2315]: I0212 20:24:41.701536 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/025e7f47-e5fc-44a0-ae5d-89a7aa729804-socket-dir\") pod \"csi-node-driver-mbm2f\" (UID: \"025e7f47-e5fc-44a0-ae5d-89a7aa729804\") " pod="calico-system/csi-node-driver-mbm2f" Feb 12 20:24:41.701646 kubelet[2315]: I0212 20:24:41.701579 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5385bd64-b40f-42ae-8b72-a6b55585b7bb-kube-proxy\") pod \"kube-proxy-qnwbw\" (UID: \"5385bd64-b40f-42ae-8b72-a6b55585b7bb\") " pod="kube-system/kube-proxy-qnwbw" Feb 12 20:24:41.701646 kubelet[2315]: I0212 20:24:41.701638 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5385bd64-b40f-42ae-8b72-a6b55585b7bb-lib-modules\") pod \"kube-proxy-qnwbw\" (UID: \"5385bd64-b40f-42ae-8b72-a6b55585b7bb\") " pod="kube-system/kube-proxy-qnwbw" Feb 12 20:24:41.701904 kubelet[2315]: I0212 20:24:41.701682 
2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-var-run-calico\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701904 kubelet[2315]: I0212 20:24:41.701728 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-var-lib-calico\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701904 kubelet[2315]: I0212 20:24:41.701774 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-flexvol-driver-host\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701904 kubelet[2315]: I0212 20:24:41.701818 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9vtr\" (UniqueName: \"kubernetes.io/projected/8034d7ff-596e-4754-add6-fbf065068070-kube-api-access-k9vtr\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.701904 kubelet[2315]: I0212 20:24:41.701862 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/025e7f47-e5fc-44a0-ae5d-89a7aa729804-registration-dir\") pod \"csi-node-driver-mbm2f\" (UID: \"025e7f47-e5fc-44a0-ae5d-89a7aa729804\") " pod="calico-system/csi-node-driver-mbm2f" Feb 12 20:24:41.702286 kubelet[2315]: I0212 20:24:41.701944 2315 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqthz\" (UniqueName: \"kubernetes.io/projected/5385bd64-b40f-42ae-8b72-a6b55585b7bb-kube-api-access-jqthz\") pod \"kube-proxy-qnwbw\" (UID: \"5385bd64-b40f-42ae-8b72-a6b55585b7bb\") " pod="kube-system/kube-proxy-qnwbw" Feb 12 20:24:41.702286 kubelet[2315]: I0212 20:24:41.702050 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-lib-modules\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.702286 kubelet[2315]: I0212 20:24:41.702111 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8034d7ff-596e-4754-add6-fbf065068070-node-certs\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.702286 kubelet[2315]: I0212 20:24:41.702155 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5385bd64-b40f-42ae-8b72-a6b55585b7bb-xtables-lock\") pod \"kube-proxy-qnwbw\" (UID: \"5385bd64-b40f-42ae-8b72-a6b55585b7bb\") " pod="kube-system/kube-proxy-qnwbw" Feb 12 20:24:41.702286 kubelet[2315]: I0212 20:24:41.702209 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8034d7ff-596e-4754-add6-fbf065068070-cni-log-dir\") pod \"calico-node-68npm\" (UID: \"8034d7ff-596e-4754-add6-fbf065068070\") " pod="calico-system/calico-node-68npm" Feb 12 20:24:41.702585 kubelet[2315]: I0212 20:24:41.702261 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-bg6sd\" (UniqueName: \"kubernetes.io/projected/025e7f47-e5fc-44a0-ae5d-89a7aa729804-kube-api-access-bg6sd\") pod \"csi-node-driver-mbm2f\" (UID: \"025e7f47-e5fc-44a0-ae5d-89a7aa729804\") " pod="calico-system/csi-node-driver-mbm2f" Feb 12 20:24:41.766697 kubelet[2315]: I0212 20:24:41.766650 2315 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:24:41.803863 kubelet[2315]: I0212 20:24:41.803787 2315 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:24:41.808680 kubelet[2315]: E0212 20:24:41.808605 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:41.808680 kubelet[2315]: W0212 20:24:41.808657 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:41.808942 kubelet[2315]: E0212 20:24:41.808700 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:41.809175 kubelet[2315]: E0212 20:24:41.809122 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:41.809175 kubelet[2315]: W0212 20:24:41.809165 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:41.809355 kubelet[2315]: E0212 20:24:41.809202 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:41.809595 kubelet[2315]: E0212 20:24:41.809546 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:41.809595 kubelet[2315]: W0212 20:24:41.809582 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:41.809872 kubelet[2315]: E0212 20:24:41.809620 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:41.810121 kubelet[2315]: E0212 20:24:41.810073 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:41.810121 kubelet[2315]: W0212 20:24:41.810115 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:41.810316 kubelet[2315]: E0212 20:24:41.810152 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:41.820637 kubelet[2315]: E0212 20:24:41.820553 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:41.820637 kubelet[2315]: W0212 20:24:41.820599 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:41.820637 kubelet[2315]: E0212 20:24:41.820637 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:41.906806 kubelet[2315]: E0212 20:24:41.906768 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:41.907103 kubelet[2315]: W0212 20:24:41.907067 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:41.907300 kubelet[2315]: E0212 20:24:41.907275 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:41.907997 kubelet[2315]: E0212 20:24:41.907926 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:41.908278 kubelet[2315]: W0212 20:24:41.908236 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:41.911647 kubelet[2315]: E0212 20:24:41.908465 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:41.911647 kubelet[2315]: E0212 20:24:41.909084 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:41.911647 kubelet[2315]: W0212 20:24:41.909114 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:41.911647 kubelet[2315]: E0212 20:24:41.909148 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:42.009908 kubelet[2315]: E0212 20:24:42.009861 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.009908 kubelet[2315]: W0212 20:24:42.009901 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.010233 kubelet[2315]: E0212 20:24:42.009940 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:42.010437 kubelet[2315]: E0212 20:24:42.010403 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.010543 kubelet[2315]: W0212 20:24:42.010438 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.010543 kubelet[2315]: E0212 20:24:42.010474 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:42.010903 kubelet[2315]: E0212 20:24:42.010870 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.011067 kubelet[2315]: W0212 20:24:42.010902 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.011067 kubelet[2315]: E0212 20:24:42.010935 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:42.082508 amazon-ssm-agent[1771]: 2024-02-12 20:24:42 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 12 20:24:42.084663 kubelet[2315]: E0212 20:24:42.084621 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.084663 kubelet[2315]: W0212 20:24:42.084658 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.084907 kubelet[2315]: E0212 20:24:42.084698 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:42.112120 kubelet[2315]: E0212 20:24:42.112083 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.112350 kubelet[2315]: W0212 20:24:42.112316 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.112550 kubelet[2315]: E0212 20:24:42.112523 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:42.113184 kubelet[2315]: E0212 20:24:42.113152 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.113397 kubelet[2315]: W0212 20:24:42.113366 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.113566 kubelet[2315]: E0212 20:24:42.113542 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:42.215483 kubelet[2315]: E0212 20:24:42.215435 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.215736 kubelet[2315]: W0212 20:24:42.215702 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.215918 kubelet[2315]: E0212 20:24:42.215894 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:42.216596 kubelet[2315]: E0212 20:24:42.216563 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.216797 kubelet[2315]: W0212 20:24:42.216768 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.217020 kubelet[2315]: E0212 20:24:42.216951 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:42.287658 env[1799]: time="2024-02-12T20:24:42.286819454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68npm,Uid:8034d7ff-596e-4754-add6-fbf065068070,Namespace:calico-system,Attempt:0,}" Feb 12 20:24:42.291245 kubelet[2315]: E0212 20:24:42.287161 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.291245 kubelet[2315]: W0212 20:24:42.287187 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.291245 kubelet[2315]: E0212 20:24:42.287225 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:42.317881 kubelet[2315]: E0212 20:24:42.317822 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.317881 kubelet[2315]: W0212 20:24:42.317864 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.318193 kubelet[2315]: E0212 20:24:42.317902 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:24:42.419572 kubelet[2315]: E0212 20:24:42.419521 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.419572 kubelet[2315]: W0212 20:24:42.419558 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.419796 kubelet[2315]: E0212 20:24:42.419596 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:42.488487 kubelet[2315]: E0212 20:24:42.488350 2315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:24:42.488487 kubelet[2315]: W0212 20:24:42.488388 2315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:24:42.488487 kubelet[2315]: E0212 20:24:42.488423 2315 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:24:42.583365 env[1799]: time="2024-02-12T20:24:42.583289406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnwbw,Uid:5385bd64-b40f-42ae-8b72-a6b55585b7bb,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:42.662441 kubelet[2315]: E0212 20:24:42.662258 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:42.840512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322686751.mount: Deactivated successfully. 
Feb 12 20:24:42.852820 env[1799]: time="2024-02-12T20:24:42.852735048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:42.860681 env[1799]: time="2024-02-12T20:24:42.860620085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:42.862423 env[1799]: time="2024-02-12T20:24:42.862368107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:42.867098 env[1799]: time="2024-02-12T20:24:42.867033664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:42.869160 env[1799]: time="2024-02-12T20:24:42.869093292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:42.881120 env[1799]: time="2024-02-12T20:24:42.881059916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:42.888610 env[1799]: time="2024-02-12T20:24:42.888542371Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:42.890111 env[1799]: time="2024-02-12T20:24:42.890051175Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:42.932168 env[1799]: time="2024-02-12T20:24:42.930503162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:42.932168 env[1799]: time="2024-02-12T20:24:42.930593658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:42.932168 env[1799]: time="2024-02-12T20:24:42.930619760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:42.932539 env[1799]: time="2024-02-12T20:24:42.932113659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96e32c107a5add4fb0d10fa776bbef151d439cb0755a6ec5e704496912e8a1e9 pid=2434 runtime=io.containerd.runc.v2 Feb 12 20:24:42.944455 env[1799]: time="2024-02-12T20:24:42.944285252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:42.944455 env[1799]: time="2024-02-12T20:24:42.944391900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:42.944775 env[1799]: time="2024-02-12T20:24:42.944633003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:42.948191 env[1799]: time="2024-02-12T20:24:42.948074277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96d5fc3221cd7dc0843d89cc6e21b2d0b5ce65f0ec18f5f2735572c9057de655 pid=2441 runtime=io.containerd.runc.v2 Feb 12 20:24:42.958591 kubelet[2315]: E0212 20:24:42.958096 2315 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:43.057099 env[1799]: time="2024-02-12T20:24:43.057020219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68npm,Uid:8034d7ff-596e-4754-add6-fbf065068070,Namespace:calico-system,Attempt:0,} returns sandbox id \"96e32c107a5add4fb0d10fa776bbef151d439cb0755a6ec5e704496912e8a1e9\"" Feb 12 20:24:43.060927 env[1799]: time="2024-02-12T20:24:43.060851644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 12 20:24:43.067373 env[1799]: time="2024-02-12T20:24:43.067316975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnwbw,Uid:5385bd64-b40f-42ae-8b72-a6b55585b7bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d5fc3221cd7dc0843d89cc6e21b2d0b5ce65f0ec18f5f2735572c9057de655\"" Feb 12 20:24:43.663279 kubelet[2315]: E0212 20:24:43.663232 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:44.664073 kubelet[2315]: E0212 20:24:44.664006 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:44.693949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2871043644.mount: Deactivated successfully. 
Feb 12 20:24:44.841064 env[1799]: time="2024-02-12T20:24:44.840951858Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:44.846114 env[1799]: time="2024-02-12T20:24:44.846046814Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:44.848797 env[1799]: time="2024-02-12T20:24:44.848743729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:44.851096 env[1799]: time="2024-02-12T20:24:44.851041087Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:44.852030 env[1799]: time="2024-02-12T20:24:44.851938974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 12 20:24:44.853524 env[1799]: time="2024-02-12T20:24:44.853141602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:24:44.856619 env[1799]: time="2024-02-12T20:24:44.856563692Z" level=info msg="CreateContainer within sandbox \"96e32c107a5add4fb0d10fa776bbef151d439cb0755a6ec5e704496912e8a1e9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 12 20:24:44.878484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331770552.mount: Deactivated successfully. 
Feb 12 20:24:44.892778 env[1799]: time="2024-02-12T20:24:44.892702983Z" level=info msg="CreateContainer within sandbox \"96e32c107a5add4fb0d10fa776bbef151d439cb0755a6ec5e704496912e8a1e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"85b12539d173c1a3be3dcdd79fda267732d992f25581ff339bbda27efe57aa3f\"" Feb 12 20:24:44.894544 env[1799]: time="2024-02-12T20:24:44.894485365Z" level=info msg="StartContainer for \"85b12539d173c1a3be3dcdd79fda267732d992f25581ff339bbda27efe57aa3f\"" Feb 12 20:24:44.963291 kubelet[2315]: E0212 20:24:44.963115 2315 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:45.034687 env[1799]: time="2024-02-12T20:24:45.034610525Z" level=info msg="StartContainer for \"85b12539d173c1a3be3dcdd79fda267732d992f25581ff339bbda27efe57aa3f\" returns successfully" Feb 12 20:24:45.292306 env[1799]: time="2024-02-12T20:24:45.292230079Z" level=info msg="shim disconnected" id=85b12539d173c1a3be3dcdd79fda267732d992f25581ff339bbda27efe57aa3f Feb 12 20:24:45.292586 env[1799]: time="2024-02-12T20:24:45.292301494Z" level=warning msg="cleaning up after shim disconnected" id=85b12539d173c1a3be3dcdd79fda267732d992f25581ff339bbda27efe57aa3f namespace=k8s.io Feb 12 20:24:45.292586 env[1799]: time="2024-02-12T20:24:45.292358556Z" level=info msg="cleaning up dead shim" Feb 12 20:24:45.307471 env[1799]: time="2024-02-12T20:24:45.307385331Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:24:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2553 runtime=io.containerd.runc.v2\n" Feb 12 20:24:45.665209 kubelet[2315]: E0212 20:24:45.665016 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 20:24:45.874533 systemd[1]: run-containerd-runc-k8s.io-85b12539d173c1a3be3dcdd79fda267732d992f25581ff339bbda27efe57aa3f-runc.QGqnc3.mount: Deactivated successfully. Feb 12 20:24:45.874829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85b12539d173c1a3be3dcdd79fda267732d992f25581ff339bbda27efe57aa3f-rootfs.mount: Deactivated successfully. Feb 12 20:24:46.317691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256044629.mount: Deactivated successfully. Feb 12 20:24:46.666331 kubelet[2315]: E0212 20:24:46.666170 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:46.960658 kubelet[2315]: E0212 20:24:46.960505 2315 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:47.006606 env[1799]: time="2024-02-12T20:24:47.006546482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:47.010014 env[1799]: time="2024-02-12T20:24:47.009913996Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:47.013272 env[1799]: time="2024-02-12T20:24:47.013174273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:47.017681 env[1799]: time="2024-02-12T20:24:47.017618053Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:47.018201 env[1799]: time="2024-02-12T20:24:47.018148927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 20:24:47.020906 env[1799]: time="2024-02-12T20:24:47.020849374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 12 20:24:47.025667 env[1799]: time="2024-02-12T20:24:47.025594460Z" level=info msg="CreateContainer within sandbox \"96d5fc3221cd7dc0843d89cc6e21b2d0b5ce65f0ec18f5f2735572c9057de655\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:24:47.052038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005660243.mount: Deactivated successfully. Feb 12 20:24:47.065847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350634094.mount: Deactivated successfully. 
Feb 12 20:24:47.072592 env[1799]: time="2024-02-12T20:24:47.072522942Z" level=info msg="CreateContainer within sandbox \"96d5fc3221cd7dc0843d89cc6e21b2d0b5ce65f0ec18f5f2735572c9057de655\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4370799975e3434569cac5a3b0f5d774fb45368dbf4b6588992a919ee9ca227a\"" Feb 12 20:24:47.073808 env[1799]: time="2024-02-12T20:24:47.073749262Z" level=info msg="StartContainer for \"4370799975e3434569cac5a3b0f5d774fb45368dbf4b6588992a919ee9ca227a\"" Feb 12 20:24:47.186864 env[1799]: time="2024-02-12T20:24:47.186424384Z" level=info msg="StartContainer for \"4370799975e3434569cac5a3b0f5d774fb45368dbf4b6588992a919ee9ca227a\" returns successfully" Feb 12 20:24:47.280000 audit[2628]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.280000 audit[2628]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe35cfdb0 a2=0 a3=ffff8c3e36c0 items=0 ppid=2591 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.300307 kernel: audit: type=1325 audit(1707769487.280:193): table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.300517 kernel: audit: type=1300 audit(1707769487.280:193): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe35cfdb0 a2=0 a3=ffff8c3e36c0 items=0 ppid=2591 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:24:47.306402 kernel: audit: 
type=1327 audit(1707769487.280:193): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:24:47.306578 kernel: audit: type=1325 audit(1707769487.286:194): table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.286000 audit[2629]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.286000 audit[2629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff94157c0 a2=0 a3=ffffb70066c0 items=0 ppid=2591 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.324251 kernel: audit: type=1300 audit(1707769487.286:194): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff94157c0 a2=0 a3=ffffb70066c0 items=0 ppid=2591 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:24:47.330941 kernel: audit: type=1327 audit(1707769487.286:194): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:24:47.294000 audit[2630]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.294000 audit[2630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd474c340 a2=0 a3=ffff9cd546c0 items=0 ppid=2591 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.337993 kernel: audit: type=1325 audit(1707769487.294:195): table=nat:37 family=10 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:24:47.356254 kernel: audit: type=1300 audit(1707769487.294:195): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd474c340 a2=0 a3=ffff9cd546c0 items=0 ppid=2591 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.356404 kernel: audit: type=1327 audit(1707769487.294:195): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:24:47.296000 audit[2631]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.362163 kernel: audit: type=1325 audit(1707769487.296:196): table=nat:38 family=2 entries=1 op=nft_register_chain pid=2631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.296000 audit[2631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff10d3860 a2=0 a3=ffffb871a6c0 items=0 ppid=2591 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.296000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:24:47.298000 audit[2632]: NETFILTER_CFG table=filter:39 family=10 
entries=1 op=nft_register_chain pid=2632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.298000 audit[2632]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe76697c0 a2=0 a3=ffffa629f6c0 items=0 ppid=2591 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 20:24:47.301000 audit[2633]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.301000 audit[2633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9e09070 a2=0 a3=ffffa83206c0 items=0 ppid=2591 pid=2633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 20:24:47.383000 audit[2634]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.383000 audit[2634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff8804010 a2=0 a3=ffffbaf8f6c0 items=0 ppid=2591 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.383000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 
20:24:47.388000 audit[2636]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.388000 audit[2636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff2667760 a2=0 a3=ffff8fff76c0 items=0 ppid=2591 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 12 20:24:47.397000 audit[2639]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.397000 audit[2639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe7e9c650 a2=0 a3=ffffb6fff6c0 items=0 ppid=2591 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 12 20:24:47.399000 audit[2640]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.399000 audit[2640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3bac880 a2=0 a3=ffffa921d6c0 items=0 ppid=2591 pid=2640 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 20:24:47.404000 audit[2642]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2642 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.404000 audit[2642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff7691170 a2=0 a3=ffffac7786c0 items=0 ppid=2591 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 20:24:47.407000 audit[2643]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.407000 audit[2643]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc77a590 a2=0 a3=ffff809d66c0 items=0 ppid=2591 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 20:24:47.413000 audit[2645]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.413000 
audit[2645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdf22b8f0 a2=0 a3=ffffa69556c0 items=0 ppid=2591 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 20:24:47.421000 audit[2648]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.421000 audit[2648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc6c3b360 a2=0 a3=ffff802c06c0 items=0 ppid=2591 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 12 20:24:47.423000 audit[2649]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.423000 audit[2649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7313480 a2=0 a3=ffff82a106c0 items=0 ppid=2591 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
20:24:47.423000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 20:24:47.428000 audit[2651]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.428000 audit[2651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffde7a9710 a2=0 a3=ffff7fc786c0 items=0 ppid=2591 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 20:24:47.431000 audit[2652]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.431000 audit[2652]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd30a4940 a2=0 a3=ffff879ba6c0 items=0 ppid=2591 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 20:24:47.438000 audit[2654]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.438000 audit[2654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeedc2250 a2=0 a3=ffffbe3f36c0 items=0 ppid=2591 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.438000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:24:47.445000 audit[2657]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.445000 audit[2657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffefd41870 a2=0 a3=ffffaa99f6c0 items=0 ppid=2591 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:24:47.454000 audit[2660]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.454000 audit[2660]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeb0f7ac0 a2=0 a3=ffff97f5e6c0 items=0 ppid=2591 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.454000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 20:24:47.456000 audit[2661]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.456000 audit[2661]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe7fc6380 a2=0 a3=ffff9f6fb6c0 items=0 ppid=2591 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 20:24:47.461000 audit[2663]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.461000 audit[2663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffdac34880 a2=0 a3=ffff8dc796c0 items=0 ppid=2591 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:24:47.468000 audit[2666]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:24:47.468000 audit[2666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe3a1f4d0 a2=0 
a3=ffff8921f6c0 items=0 ppid=2591 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:24:47.480000 audit[2670]: NETFILTER_CFG table=filter:58 family=2 entries=3 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:47.480000 audit[2670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd7779630 a2=0 a3=ffffa5eb26c0 items=0 ppid=2591 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:47.507000 audit[2670]: NETFILTER_CFG table=nat:59 family=2 entries=57 op=nft_register_chain pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:47.507000 audit[2670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd7779630 a2=0 a3=ffffa5eb26c0 items=0 ppid=2591 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:47.530000 audit[2677]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain 
pid=2677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.530000 audit[2677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffff2be9e0 a2=0 a3=ffffab1006c0 items=0 ppid=2591 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 20:24:47.537000 audit[2679]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2679 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.537000 audit[2679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffffda3570 a2=0 a3=ffffba1bc6c0 items=0 ppid=2591 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 12 20:24:47.546000 audit[2682]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.546000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffef8910a0 a2=0 a3=ffffba4936c0 items=0 ppid=2591 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.546000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 12 20:24:47.550000 audit[2683]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2683 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.550000 audit[2683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd34d7780 a2=0 a3=ffffa653a6c0 items=0 ppid=2591 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 20:24:47.555000 audit[2685]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2685 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.555000 audit[2685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffff6ce1c0 a2=0 a3=ffff99a636c0 items=0 ppid=2591 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 20:24:47.558000 audit[2686]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.558000 audit[2686]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=ffffcd6bd0d0 a2=0 a3=ffff898b76c0 items=0 ppid=2591 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 20:24:47.564000 audit[2688]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2688 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.564000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeb80f990 a2=0 a3=ffffb1adb6c0 items=0 ppid=2591 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 12 20:24:47.572000 audit[2691]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2691 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.572000 audit[2691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdeef9860 a2=0 a3=ffff8c67c6c0 items=0 ppid=2591 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.572000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 20:24:47.575000 audit[2692]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2692 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.575000 audit[2692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2457100 a2=0 a3=ffffa39856c0 items=0 ppid=2591 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 20:24:47.580000 audit[2694]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.580000 audit[2694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdfa3a320 a2=0 a3=ffffbd5cb6c0 items=0 ppid=2591 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 20:24:47.582000 audit[2695]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2695 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.582000 audit[2695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 
a0=3 a1=ffffd4147ea0 a2=0 a3=ffff9b3c86c0 items=0 ppid=2591 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 20:24:47.588000 audit[2697]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2697 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.588000 audit[2697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff0641af0 a2=0 a3=ffffb76d96c0 items=0 ppid=2591 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:24:47.598000 audit[2700]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2700 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.598000 audit[2700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcabc08a0 a2=0 a3=ffffac54d6c0 items=0 ppid=2591 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.598000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 20:24:47.606000 audit[2703]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2703 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.606000 audit[2703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffde324df0 a2=0 a3=ffff971fc6c0 items=0 ppid=2591 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 12 20:24:47.609000 audit[2704]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2704 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.609000 audit[2704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffef8eb1d0 a2=0 a3=ffffbccdc6c0 items=0 ppid=2591 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 20:24:47.615000 audit[2706]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2706 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.615000 audit[2706]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=600 a0=3 a1=ffffd3516070 a2=0 a3=ffffba0e66c0 items=0 ppid=2591 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:24:47.622000 audit[2709]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2709 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:24:47.622000 audit[2709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffda11b4b0 a2=0 a3=ffff9ae396c0 items=0 ppid=2591 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:24:47.633000 audit[2713]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 20:24:47.633000 audit[2713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffffcbe14a0 a2=0 a3=ffffaffdd6c0 items=0 ppid=2591 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.633000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:47.635000 audit[2713]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 20:24:47.635000 audit[2713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=fffffcbe14a0 a2=0 a3=ffffaffdd6c0 items=0 ppid=2591 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:47.635000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:47.666692 kubelet[2315]: E0212 20:24:47.666626 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:48.074927 kubelet[2315]: I0212 20:24:48.074450 2315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qnwbw" podStartSLOduration=-9.223372021780388e+09 pod.CreationTimestamp="2024-02-12 20:24:33 +0000 UTC" firstStartedPulling="2024-02-12 20:24:43.070547991 +0000 UTC m=+23.379654932" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:48.073906514 +0000 UTC m=+28.383013455" watchObservedRunningTime="2024-02-12 20:24:48.074388653 +0000 UTC m=+28.383495594" Feb 12 20:24:48.242066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713303945.mount: Deactivated successfully. 
Feb 12 20:24:48.668020 kubelet[2315]: E0212 20:24:48.667803 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:48.959278 kubelet[2315]: E0212 20:24:48.958513 2315 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:49.669045 kubelet[2315]: E0212 20:24:49.668944 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:50.669459 kubelet[2315]: E0212 20:24:50.669342 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:50.960654 kubelet[2315]: E0212 20:24:50.957929 2315 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:51.670659 kubelet[2315]: E0212 20:24:51.670572 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:52.008520 env[1799]: time="2024-02-12T20:24:52.008460458Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:52.010870 env[1799]: time="2024-02-12T20:24:52.010819881Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:24:52.013652 env[1799]: time="2024-02-12T20:24:52.013589042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:52.016086 env[1799]: time="2024-02-12T20:24:52.016034004Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:52.017021 env[1799]: time="2024-02-12T20:24:52.016953681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 12 20:24:52.020400 env[1799]: time="2024-02-12T20:24:52.020345921Z" level=info msg="CreateContainer within sandbox \"96e32c107a5add4fb0d10fa776bbef151d439cb0755a6ec5e704496912e8a1e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 20:24:52.040908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054553378.mount: Deactivated successfully. 
Feb 12 20:24:52.049322 env[1799]: time="2024-02-12T20:24:52.049254192Z" level=info msg="CreateContainer within sandbox \"96e32c107a5add4fb0d10fa776bbef151d439cb0755a6ec5e704496912e8a1e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d2bc2565412c68dfd980c10eece6882f5a3dc8863887b19fb2d7d3ebef5b469c\"" Feb 12 20:24:52.050452 env[1799]: time="2024-02-12T20:24:52.050386010Z" level=info msg="StartContainer for \"d2bc2565412c68dfd980c10eece6882f5a3dc8863887b19fb2d7d3ebef5b469c\"" Feb 12 20:24:52.173772 env[1799]: time="2024-02-12T20:24:52.173693875Z" level=info msg="StartContainer for \"d2bc2565412c68dfd980c10eece6882f5a3dc8863887b19fb2d7d3ebef5b469c\" returns successfully" Feb 12 20:24:52.671374 kubelet[2315]: E0212 20:24:52.671307 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:52.958633 kubelet[2315]: E0212 20:24:52.958507 2315 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:53.403899 env[1799]: time="2024-02-12T20:24:53.403788513Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:24:53.441495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2bc2565412c68dfd980c10eece6882f5a3dc8863887b19fb2d7d3ebef5b469c-rootfs.mount: Deactivated successfully. 
Feb 12 20:24:53.508025 kubelet[2315]: I0212 20:24:53.507948 2315 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:24:53.672500 kubelet[2315]: E0212 20:24:53.672336 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:54.672944 kubelet[2315]: E0212 20:24:54.672875 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:54.859646 env[1799]: time="2024-02-12T20:24:54.859575523Z" level=info msg="shim disconnected" id=d2bc2565412c68dfd980c10eece6882f5a3dc8863887b19fb2d7d3ebef5b469c Feb 12 20:24:54.860463 env[1799]: time="2024-02-12T20:24:54.859648724Z" level=warning msg="cleaning up after shim disconnected" id=d2bc2565412c68dfd980c10eece6882f5a3dc8863887b19fb2d7d3ebef5b469c namespace=k8s.io Feb 12 20:24:54.860463 env[1799]: time="2024-02-12T20:24:54.859672689Z" level=info msg="cleaning up dead shim" Feb 12 20:24:54.874720 env[1799]: time="2024-02-12T20:24:54.874648662Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:24:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2767 runtime=io.containerd.runc.v2\n" Feb 12 20:24:54.966668 env[1799]: time="2024-02-12T20:24:54.966598616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mbm2f,Uid:025e7f47-e5fc-44a0-ae5d-89a7aa729804,Namespace:calico-system,Attempt:0,}" Feb 12 20:24:55.081641 env[1799]: time="2024-02-12T20:24:55.081539585Z" level=error msg="Failed to destroy network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:55.085110 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2-shm.mount: Deactivated successfully. Feb 12 20:24:55.086574 env[1799]: time="2024-02-12T20:24:55.086492705Z" level=error msg="encountered an error cleaning up failed sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:55.086918 env[1799]: time="2024-02-12T20:24:55.086856804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mbm2f,Uid:025e7f47-e5fc-44a0-ae5d-89a7aa729804,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:55.087813 kubelet[2315]: E0212 20:24:55.087746 2315 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:55.088078 kubelet[2315]: E0212 20:24:55.087898 2315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mbm2f" Feb 12 20:24:55.088078 kubelet[2315]: E0212 20:24:55.087946 2315 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mbm2f" Feb 12 20:24:55.088250 kubelet[2315]: E0212 20:24:55.088132 2315 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mbm2f_calico-system(025e7f47-e5fc-44a0-ae5d-89a7aa729804)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mbm2f_calico-system(025e7f47-e5fc-44a0-ae5d-89a7aa729804)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:55.090925 kubelet[2315]: I0212 20:24:55.090885 2315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:24:55.092265 env[1799]: time="2024-02-12T20:24:55.092210972Z" level=info msg="StopPodSandbox for \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\"" Feb 12 20:24:55.099992 env[1799]: time="2024-02-12T20:24:55.099912709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 12 20:24:55.150325 env[1799]: time="2024-02-12T20:24:55.150244215Z" level=error msg="StopPodSandbox for 
\"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\" failed" error="failed to destroy network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:55.150996 kubelet[2315]: E0212 20:24:55.150903 2315 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:24:55.151166 kubelet[2315]: E0212 20:24:55.151070 2315 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2} Feb 12 20:24:55.151255 kubelet[2315]: E0212 20:24:55.151183 2315 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"025e7f47-e5fc-44a0-ae5d-89a7aa729804\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 20:24:55.151391 kubelet[2315]: E0212 20:24:55.151249 2315 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"025e7f47-e5fc-44a0-ae5d-89a7aa729804\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mbm2f" podUID=025e7f47-e5fc-44a0-ae5d-89a7aa729804 Feb 12 20:24:55.673309 kubelet[2315]: E0212 20:24:55.673241 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:55.917091 update_engine[1789]: I0212 20:24:55.917023 1789 update_attempter.cc:509] Updating boot flags... Feb 12 20:24:56.674417 kubelet[2315]: E0212 20:24:56.674339 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:57.674907 kubelet[2315]: E0212 20:24:57.674836 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:58.675787 kubelet[2315]: E0212 20:24:58.675717 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:24:59.024999 kubelet[2315]: I0212 20:24:59.024919 2315 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:59.137648 kubelet[2315]: I0212 20:24:59.137604 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vws5\" (UniqueName: \"kubernetes.io/projected/195a7bd2-b313-41cb-9a9a-4c42180b67a9-kube-api-access-9vws5\") pod \"nginx-deployment-8ffc5cf85-6tgdf\" (UID: \"195a7bd2-b313-41cb-9a9a-4c42180b67a9\") " pod="default/nginx-deployment-8ffc5cf85-6tgdf" Feb 12 20:24:59.338729 env[1799]: time="2024-02-12T20:24:59.338210918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-6tgdf,Uid:195a7bd2-b313-41cb-9a9a-4c42180b67a9,Namespace:default,Attempt:0,}" Feb 12 20:24:59.490620 env[1799]: 
time="2024-02-12T20:24:59.490517112Z" level=error msg="Failed to destroy network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:59.494090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8-shm.mount: Deactivated successfully. Feb 12 20:24:59.499739 env[1799]: time="2024-02-12T20:24:59.499632820Z" level=error msg="encountered an error cleaning up failed sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:59.499941 env[1799]: time="2024-02-12T20:24:59.499745250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-6tgdf,Uid:195a7bd2-b313-41cb-9a9a-4c42180b67a9,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:59.500856 kubelet[2315]: E0212 20:24:59.500307 2315 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:59.500856 
kubelet[2315]: E0212 20:24:59.500383 2315 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-6tgdf" Feb 12 20:24:59.500856 kubelet[2315]: E0212 20:24:59.500425 2315 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-6tgdf" Feb 12 20:24:59.501230 kubelet[2315]: E0212 20:24:59.500526 2315 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-6tgdf_default(195a7bd2-b313-41cb-9a9a-4c42180b67a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-6tgdf_default(195a7bd2-b313-41cb-9a9a-4c42180b67a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-6tgdf" podUID=195a7bd2-b313-41cb-9a9a-4c42180b67a9 Feb 12 20:24:59.677709 kubelet[2315]: E0212 20:24:59.676829 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:00.121106 kubelet[2315]: I0212 
20:25:00.120184 2315 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:00.121664 env[1799]: time="2024-02-12T20:25:00.121583575Z" level=info msg="StopPodSandbox for \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\"" Feb 12 20:25:00.193299 env[1799]: time="2024-02-12T20:25:00.193196727Z" level=error msg="StopPodSandbox for \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\" failed" error="failed to destroy network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:25:00.193589 kubelet[2315]: E0212 20:25:00.193546 2315 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:00.193717 kubelet[2315]: E0212 20:25:00.193618 2315 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8} Feb 12 20:25:00.193717 kubelet[2315]: E0212 20:25:00.193681 2315 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"195a7bd2-b313-41cb-9a9a-4c42180b67a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 20:25:00.193907 kubelet[2315]: E0212 20:25:00.193738 2315 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"195a7bd2-b313-41cb-9a9a-4c42180b67a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-6tgdf" podUID=195a7bd2-b313-41cb-9a9a-4c42180b67a9 Feb 12 20:25:00.644014 kubelet[2315]: E0212 20:25:00.643928 2315 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:00.677469 kubelet[2315]: E0212 20:25:00.677383 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:01.677724 kubelet[2315]: E0212 20:25:01.677641 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:02.678755 kubelet[2315]: E0212 20:25:02.678691 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:03.679305 kubelet[2315]: E0212 20:25:03.679201 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:03.922299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62942447.mount: Deactivated successfully. 
Feb 12 20:25:04.018299 env[1799]: time="2024-02-12T20:25:04.018240837Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:04.021573 env[1799]: time="2024-02-12T20:25:04.021523148Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:04.025049 env[1799]: time="2024-02-12T20:25:04.024989494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:04.028788 env[1799]: time="2024-02-12T20:25:04.028726526Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:04.030655 env[1799]: time="2024-02-12T20:25:04.030604835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 12 20:25:04.055560 env[1799]: time="2024-02-12T20:25:04.055477432Z" level=info msg="CreateContainer within sandbox \"96e32c107a5add4fb0d10fa776bbef151d439cb0755a6ec5e704496912e8a1e9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 12 20:25:04.082372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464997566.mount: Deactivated successfully. 
Feb 12 20:25:04.090348 env[1799]: time="2024-02-12T20:25:04.090270069Z" level=info msg="CreateContainer within sandbox \"96e32c107a5add4fb0d10fa776bbef151d439cb0755a6ec5e704496912e8a1e9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4ca38e1b8ddf9f7d8593395fd22139e13e7ed1b42dfe64e98d32e21e90380184\"" Feb 12 20:25:04.091544 env[1799]: time="2024-02-12T20:25:04.091498114Z" level=info msg="StartContainer for \"4ca38e1b8ddf9f7d8593395fd22139e13e7ed1b42dfe64e98d32e21e90380184\"" Feb 12 20:25:04.202201 env[1799]: time="2024-02-12T20:25:04.202136960Z" level=info msg="StartContainer for \"4ca38e1b8ddf9f7d8593395fd22139e13e7ed1b42dfe64e98d32e21e90380184\" returns successfully" Feb 12 20:25:04.317981 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 12 20:25:04.318157 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 12 20:25:04.679751 kubelet[2315]: E0212 20:25:04.679664 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:05.680235 kubelet[2315]: E0212 20:25:05.680190 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:05.808000 audit[3210]: AVC avc: denied { write } for pid=3210 comm="tee" name="fd" dev="proc" ino=15126 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.811602 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 12 20:25:05.811735 kernel: audit: type=1400 audit(1707769505.808:238): avc: denied { write } for pid=3210 comm="tee" name="fd" dev="proc" ino=15126 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.808000 audit[3210]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd7d27986 a2=241 a3=1b6 items=1 ppid=3172 pid=3210 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.836597 kernel: audit: type=1300 audit(1707769505.808:238): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd7d27986 a2=241 a3=1b6 items=1 ppid=3172 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.808000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 12 20:25:05.852233 kernel: audit: type=1307 audit(1707769505.808:238): cwd="/etc/service/enabled/cni/log" Feb 12 20:25:05.808000 audit: PATH item=0 name="/dev/fd/63" inode=15114 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.865127 kernel: audit: type=1302 audit(1707769505.808:238): item=0 name="/dev/fd/63" inode=15114 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.878905 kernel: audit: type=1327 audit(1707769505.808:238): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.800000 audit[3212]: AVC avc: denied { write } for pid=3212 comm="tee" name="fd" dev="proc" ino=15120 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.895974 kernel: audit: type=1400 audit(1707769505.800:237): avc: denied { write } for pid=3212 comm="tee" name="fd" dev="proc" 
ino=15120 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.800000 audit[3212]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff51cb984 a2=241 a3=1b6 items=1 ppid=3168 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.913760 kernel: audit: type=1300 audit(1707769505.800:237): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff51cb984 a2=241 a3=1b6 items=1 ppid=3168 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.913927 kernel: audit: type=1307 audit(1707769505.800:237): cwd="/etc/service/enabled/bird6/log" Feb 12 20:25:05.800000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 12 20:25:05.800000 audit: PATH item=0 name="/dev/fd/63" inode=15115 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.925165 kernel: audit: type=1302 audit(1707769505.800:237): item=0 name="/dev/fd/63" inode=15115 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.936345 kernel: audit: type=1327 audit(1707769505.800:237): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.824000 audit[3219]: AVC avc: denied { write } for pid=3219 
comm="tee" name="fd" dev="proc" ino=15134 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.824000 audit[3219]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff0ea2984 a2=241 a3=1b6 items=1 ppid=3166 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.824000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 12 20:25:05.824000 audit: PATH item=0 name="/dev/fd/63" inode=16217 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.829000 audit[3227]: AVC avc: denied { write } for pid=3227 comm="tee" name="fd" dev="proc" ino=15138 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.829000 audit[3227]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdd65b975 a2=241 a3=1b6 items=1 ppid=3184 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.829000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 12 20:25:05.829000 audit: PATH item=0 name="/dev/fd/63" inode=15131 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.829000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.840000 audit[3221]: AVC avc: denied { write } for pid=3221 comm="tee" name="fd" dev="proc" ino=15142 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.840000 audit[3221]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff0c13974 a2=241 a3=1b6 items=1 ppid=3167 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.840000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 12 20:25:05.840000 audit: PATH item=0 name="/dev/fd/63" inode=16218 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.840000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.845000 audit[3224]: AVC avc: denied { write } for pid=3224 comm="tee" name="fd" dev="proc" ino=15146 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.845000 audit[3224]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe6eff985 a2=241 a3=1b6 items=1 ppid=3169 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.845000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 12 20:25:05.845000 audit: PATH item=0 name="/dev/fd/63" inode=16221 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.845000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.865000 audit[3231]: AVC avc: denied { write } for pid=3231 comm="tee" name="fd" dev="proc" ino=15150 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 20:25:05.865000 audit[3231]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff9494984 a2=241 a3=1b6 items=1 ppid=3176 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:05.865000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 12 20:25:05.865000 audit: PATH item=0 name="/dev/fd/63" inode=16222 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:25:05.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 20:25:05.960103 env[1799]: time="2024-02-12T20:25:05.959283365Z" level=info msg="StopPodSandbox for \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\"" Feb 12 20:25:06.131308 kubelet[2315]: I0212 20:25:06.130871 2315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-68npm" podStartSLOduration=-9.223372003723965e+09 pod.CreationTimestamp="2024-02-12 20:24:33 +0000 UTC" firstStartedPulling="2024-02-12 20:24:43.05985367 +0000 UTC m=+23.368960599" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:05.170283167 +0000 UTC m=+45.479390132" 
watchObservedRunningTime="2024-02-12 20:25:06.130810517 +0000 UTC m=+46.439917482" Feb 12 20:25:06.288619 systemd[1]: run-containerd-runc-k8s.io-4ca38e1b8ddf9f7d8593395fd22139e13e7ed1b42dfe64e98d32e21e90380184-runc.dwSqgA.mount: Deactivated successfully. Feb 12 20:25:06.374089 kernel: Initializing XFRM netlink socket Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.129 [INFO][3247] k8s.go 578: Cleaning up netns ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.131 [INFO][3247] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" iface="eth0" netns="/var/run/netns/cni-eb9aec4f-e7d5-9af1-4180-22ab88c08980" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.132 [INFO][3247] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" iface="eth0" netns="/var/run/netns/cni-eb9aec4f-e7d5-9af1-4180-22ab88c08980" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.132 [INFO][3247] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" iface="eth0" netns="/var/run/netns/cni-eb9aec4f-e7d5-9af1-4180-22ab88c08980" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.132 [INFO][3247] k8s.go 585: Releasing IP address(es) ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.132 [INFO][3247] utils.go 188: Calico CNI releasing IP address ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.333 [INFO][3263] ipam_plugin.go 415: Releasing address using handleID ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.335 [INFO][3263] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.335 [INFO][3263] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.368 [WARNING][3263] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.374 [INFO][3263] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.381 [INFO][3263] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:25:06.390101 env[1799]: 2024-02-12 20:25:06.384 [INFO][3247] k8s.go 591: Teardown processing complete. ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:06.396299 systemd[1]: run-netns-cni\x2deb9aec4f\x2de7d5\x2d9af1\x2d4180\x2d22ab88c08980.mount: Deactivated successfully. Feb 12 20:25:06.398630 env[1799]: time="2024-02-12T20:25:06.398546169Z" level=info msg="TearDown network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\" successfully" Feb 12 20:25:06.398630 env[1799]: time="2024-02-12T20:25:06.398617834Z" level=info msg="StopPodSandbox for \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\" returns successfully" Feb 12 20:25:06.399808 env[1799]: time="2024-02-12T20:25:06.399728649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mbm2f,Uid:025e7f47-e5fc-44a0-ae5d-89a7aa729804,Namespace:calico-system,Attempt:1,}" Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit: BPF prog-id=10 op=LOAD Feb 12 20:25:06.671000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe5148cb8 a2=70 a3=0 items=0 ppid=3177 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.671000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:25:06.671000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit: BPF prog-id=11 op=LOAD Feb 12 20:25:06.671000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe5148cb8 a2=70 a3=4a174c items=0 ppid=3177 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.671000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:25:06.671000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:25:06.671000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.671000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffe5148ce8 a2=70 a3=1da1373f items=0 ppid=3177 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.671000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { perfmon } 
for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { perfmon } for pid=3363 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit[3363]: AVC avc: denied { bpf } for pid=3363 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.672000 audit: BPF prog-id=12 op=LOAD Feb 12 20:25:06.672000 audit[3363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe5148c38 a2=70 a3=1da13759 items=0 ppid=3177 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:25:06.681000 audit[3368]: AVC avc: denied { bpf } for pid=3368 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.681000 audit[3368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffe4894228 a2=70 a3=0 items=0 ppid=3177 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.681000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 20:25:06.682000 audit[3368]: AVC avc: denied { bpf } for pid=3368 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:25:06.682000 audit[3368]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffe4894108 a2=70 a3=2 items=0 ppid=3177 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.682000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 20:25:06.676049 (udev-worker)[3110]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:25:06.693000 audit: BPF prog-id=12 op=UNLOAD Feb 12 20:25:06.694365 kubelet[2315]: E0212 20:25:06.681109 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:06.682531 (udev-worker)[3365]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 20:25:06.768093 systemd-networkd[1586]: cali100955a304e: Link UP Feb 12 20:25:06.776314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali100955a304e: link becomes ready Feb 12 20:25:06.776846 systemd-networkd[1586]: cali100955a304e: Gained carrier Feb 12 20:25:06.780025 (udev-worker)[3379]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:25:06.818000 audit[3396]: NETFILTER_CFG table=mangle:79 family=2 entries=19 op=nft_register_chain pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:06.818000 audit[3396]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffd611fa70 a2=0 a3=ffff9ede3fa8 items=0 ppid=3177 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.818000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.552 [INFO][3322] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.103-k8s-csi--node--driver--mbm2f-eth0 csi-node-driver- calico-system 025e7f47-e5fc-44a0-ae5d-89a7aa729804 944 0 2024-02-12 20:24:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.16.103 csi-node-driver-mbm2f eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali100955a304e [] []}} ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Namespace="calico-system" Pod="csi-node-driver-mbm2f" 
WorkloadEndpoint="172.31.16.103-k8s-csi--node--driver--mbm2f-" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.552 [INFO][3322] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Namespace="calico-system" Pod="csi-node-driver-mbm2f" WorkloadEndpoint="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.651 [INFO][3347] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" HandleID="k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.691 [INFO][3347] ipam_plugin.go 268: Auto assigning IP ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" HandleID="k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000507380), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.16.103", "pod":"csi-node-driver-mbm2f", "timestamp":"2024-02-12 20:25:06.651299043 +0000 UTC"}, Hostname:"172.31.16.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.691 [INFO][3347] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.691 [INFO][3347] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.691 [INFO][3347] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.103' Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.695 [INFO][3347] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" host="172.31.16.103" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.705 [INFO][3347] ipam.go 372: Looking up existing affinities for host host="172.31.16.103" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.714 [INFO][3347] ipam.go 489: Trying affinity for 192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.721 [INFO][3347] ipam.go 155: Attempting to load block cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.725 [INFO][3347] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.726 [INFO][3347] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.192/26 handle="k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" host="172.31.16.103" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.730 [INFO][3347] ipam.go 1682: Creating new handle: k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.738 [INFO][3347] ipam.go 1203: Writing block in order to claim IPs block=192.168.78.192/26 handle="k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" host="172.31.16.103" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.751 [INFO][3347] ipam.go 1216: Successfully claimed IPs: [192.168.78.193/26] block=192.168.78.192/26 handle="k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" host="172.31.16.103" Feb 12 20:25:06.834762 
env[1799]: 2024-02-12 20:25:06.751 [INFO][3347] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.193/26] handle="k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" host="172.31.16.103" Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.752 [INFO][3347] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:25:06.834762 env[1799]: 2024-02-12 20:25:06.752 [INFO][3347] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.78.193/26] IPv6=[] ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" HandleID="k8s-pod-network.e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.835000 audit[3394]: NETFILTER_CFG table=raw:80 family=2 entries=19 op=nft_register_chain pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:06.835000 audit[3394]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=fffffd102ce0 a2=0 a3=ffff96c9afa8 items=0 ppid=3177 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.835000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:06.836000 audit[3397]: NETFILTER_CFG table=nat:81 family=2 entries=16 op=nft_register_chain pid=3397 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:06.836000 audit[3397]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffff817930 a2=0 a3=ffff88b60fa8 items=0 ppid=3177 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 12 20:25:06.836000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:06.841748 env[1799]: 2024-02-12 20:25:06.759 [INFO][3322] k8s.go 385: Populated endpoint ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Namespace="calico-system" Pod="csi-node-driver-mbm2f" WorkloadEndpoint="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-csi--node--driver--mbm2f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"025e7f47-e5fc-44a0-ae5d-89a7aa729804", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"", Pod:"csi-node-driver-mbm2f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali100955a304e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:06.841748 env[1799]: 2024-02-12 20:25:06.759 
[INFO][3322] k8s.go 386: Calico CNI using IPs: [192.168.78.193/32] ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Namespace="calico-system" Pod="csi-node-driver-mbm2f" WorkloadEndpoint="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.841748 env[1799]: 2024-02-12 20:25:06.760 [INFO][3322] dataplane_linux.go 68: Setting the host side veth name to cali100955a304e ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Namespace="calico-system" Pod="csi-node-driver-mbm2f" WorkloadEndpoint="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.841748 env[1799]: 2024-02-12 20:25:06.784 [INFO][3322] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Namespace="calico-system" Pod="csi-node-driver-mbm2f" WorkloadEndpoint="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.841748 env[1799]: 2024-02-12 20:25:06.785 [INFO][3322] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Namespace="calico-system" Pod="csi-node-driver-mbm2f" WorkloadEndpoint="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-csi--node--driver--mbm2f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"025e7f47-e5fc-44a0-ae5d-89a7aa729804", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c", Pod:"csi-node-driver-mbm2f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali100955a304e", MAC:"12:12:c5:94:53:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:06.841748 env[1799]: 2024-02-12 20:25:06.816 [INFO][3322] k8s.go 491: Wrote updated endpoint to datastore ContainerID="e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c" Namespace="calico-system" Pod="csi-node-driver-mbm2f" WorkloadEndpoint="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:06.848000 audit[3400]: NETFILTER_CFG table=filter:82 family=2 entries=39 op=nft_register_chain pid=3400 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:06.848000 audit[3400]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18472 a0=3 a1=fffffcbe30f0 a2=0 a3=ffff99dabfa8 items=0 ppid=3177 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.848000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:06.886380 env[1799]: time="2024-02-12T20:25:06.886263368Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:06.886722 env[1799]: time="2024-02-12T20:25:06.886614839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:06.887060 env[1799]: time="2024-02-12T20:25:06.886940870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:06.887851 env[1799]: time="2024-02-12T20:25:06.887747170Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c pid=3424 runtime=io.containerd.runc.v2 Feb 12 20:25:06.916000 audit[3431]: NETFILTER_CFG table=filter:83 family=2 entries=36 op=nft_register_chain pid=3431 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:06.916000 audit[3431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19908 a0=3 a1=fffffd2f47e0 a2=0 a3=ffff8b9c0fa8 items=0 ppid=3177 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:06.916000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:06.982916 env[1799]: time="2024-02-12T20:25:06.982847007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mbm2f,Uid:025e7f47-e5fc-44a0-ae5d-89a7aa729804,Namespace:calico-system,Attempt:1,} returns sandbox id \"e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c\"" Feb 12 20:25:06.986206 env[1799]: time="2024-02-12T20:25:06.986041318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 12 
20:25:07.395410 systemd-networkd[1586]: vxlan.calico: Link UP Feb 12 20:25:07.395433 systemd-networkd[1586]: vxlan.calico: Gained carrier Feb 12 20:25:07.681879 kubelet[2315]: E0212 20:25:07.681602 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:08.096373 systemd-networkd[1586]: cali100955a304e: Gained IPv6LL Feb 12 20:25:08.682405 kubelet[2315]: E0212 20:25:08.682337 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:08.861334 env[1799]: time="2024-02-12T20:25:08.861244842Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:08.888456 env[1799]: time="2024-02-12T20:25:08.888395382Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:08.925487 env[1799]: time="2024-02-12T20:25:08.925417527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:08.964519 env[1799]: time="2024-02-12T20:25:08.964465230Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:08.965434 env[1799]: time="2024-02-12T20:25:08.965376230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 12 20:25:08.969260 env[1799]: time="2024-02-12T20:25:08.969195850Z" level=info msg="CreateContainer within 
sandbox \"e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 12 20:25:09.186048 systemd-networkd[1586]: vxlan.calico: Gained IPv6LL Feb 12 20:25:09.421561 env[1799]: time="2024-02-12T20:25:09.421103007Z" level=info msg="CreateContainer within sandbox \"e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"28584656a89777c50a959ffd98dbbfebda1c421aeccff4c976ba9099023a020d\"" Feb 12 20:25:09.422350 env[1799]: time="2024-02-12T20:25:09.422242056Z" level=info msg="StartContainer for \"28584656a89777c50a959ffd98dbbfebda1c421aeccff4c976ba9099023a020d\"" Feb 12 20:25:09.474885 systemd[1]: run-containerd-runc-k8s.io-28584656a89777c50a959ffd98dbbfebda1c421aeccff4c976ba9099023a020d-runc.vfQ89q.mount: Deactivated successfully. Feb 12 20:25:09.558636 env[1799]: time="2024-02-12T20:25:09.558575101Z" level=info msg="StartContainer for \"28584656a89777c50a959ffd98dbbfebda1c421aeccff4c976ba9099023a020d\" returns successfully" Feb 12 20:25:09.560692 env[1799]: time="2024-02-12T20:25:09.560640605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 12 20:25:09.683252 kubelet[2315]: E0212 20:25:09.683078 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:10.683266 kubelet[2315]: E0212 20:25:10.683202 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:11.263981 env[1799]: time="2024-02-12T20:25:11.263895427Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:11.267276 env[1799]: time="2024-02-12T20:25:11.267205254Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:11.270290 env[1799]: time="2024-02-12T20:25:11.270210374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:11.273672 env[1799]: time="2024-02-12T20:25:11.273604814Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:11.274754 env[1799]: time="2024-02-12T20:25:11.274692477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 12 20:25:11.279080 env[1799]: time="2024-02-12T20:25:11.279008319Z" level=info msg="CreateContainer within sandbox \"e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 12 20:25:11.303872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383021898.mount: Deactivated successfully. 
Feb 12 20:25:11.311941 env[1799]: time="2024-02-12T20:25:11.311858929Z" level=info msg="CreateContainer within sandbox \"e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9864d0cdb1450f94f69fbf607d343bf48f7b868e4ab98a13840fea2f335a8778\"" Feb 12 20:25:11.312951 env[1799]: time="2024-02-12T20:25:11.312879368Z" level=info msg="StartContainer for \"9864d0cdb1450f94f69fbf607d343bf48f7b868e4ab98a13840fea2f335a8778\"" Feb 12 20:25:11.447022 env[1799]: time="2024-02-12T20:25:11.446379353Z" level=info msg="StartContainer for \"9864d0cdb1450f94f69fbf607d343bf48f7b868e4ab98a13840fea2f335a8778\" returns successfully" Feb 12 20:25:11.684603 kubelet[2315]: E0212 20:25:11.684033 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:11.867499 kubelet[2315]: I0212 20:25:11.867444 2315 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 12 20:25:11.867700 kubelet[2315]: I0212 20:25:11.867516 2315 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 12 20:25:12.187821 kubelet[2315]: I0212 20:25:12.187734 2315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-mbm2f" podStartSLOduration=-9.22337199766713e+09 pod.CreationTimestamp="2024-02-12 20:24:33 +0000 UTC" firstStartedPulling="2024-02-12 20:25:06.985557305 +0000 UTC m=+47.294664234" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:12.185413789 +0000 UTC m=+52.494520754" watchObservedRunningTime="2024-02-12 20:25:12.187646872 +0000 UTC m=+52.496753909" Feb 12 20:25:12.684704 kubelet[2315]: E0212 20:25:12.684655 2315 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:13.686093 kubelet[2315]: E0212 20:25:13.686040 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:14.687634 kubelet[2315]: E0212 20:25:14.687586 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:15.689020 kubelet[2315]: E0212 20:25:15.688951 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:15.958942 env[1799]: time="2024-02-12T20:25:15.958603550Z" level=info msg="StopPodSandbox for \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\"" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.030 [INFO][3558] k8s.go 578: Cleaning up netns ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.030 [INFO][3558] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" iface="eth0" netns="/var/run/netns/cni-0e0d7856-3ba7-31a6-090b-10adc2221f01" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.031 [INFO][3558] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" iface="eth0" netns="/var/run/netns/cni-0e0d7856-3ba7-31a6-090b-10adc2221f01" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.031 [INFO][3558] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" iface="eth0" netns="/var/run/netns/cni-0e0d7856-3ba7-31a6-090b-10adc2221f01" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.031 [INFO][3558] k8s.go 585: Releasing IP address(es) ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.031 [INFO][3558] utils.go 188: Calico CNI releasing IP address ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.067 [INFO][3565] ipam_plugin.go 415: Releasing address using handleID ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.067 [INFO][3565] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.068 [INFO][3565] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.083 [WARNING][3565] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.083 [INFO][3565] ipam_plugin.go 443: Releasing address using workloadID ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.086 [INFO][3565] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:25:16.091315 env[1799]: 2024-02-12 20:25:16.089 [INFO][3558] k8s.go 591: Teardown processing complete. ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:16.095847 systemd[1]: run-netns-cni\x2d0e0d7856\x2d3ba7\x2d31a6\x2d090b\x2d10adc2221f01.mount: Deactivated successfully. 
Feb 12 20:25:16.097690 env[1799]: time="2024-02-12T20:25:16.097624402Z" level=info msg="TearDown network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\" successfully" Feb 12 20:25:16.097873 env[1799]: time="2024-02-12T20:25:16.097835291Z" level=info msg="StopPodSandbox for \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\" returns successfully" Feb 12 20:25:16.099329 env[1799]: time="2024-02-12T20:25:16.099250938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-6tgdf,Uid:195a7bd2-b313-41cb-9a9a-4c42180b67a9,Namespace:default,Attempt:1,}" Feb 12 20:25:16.313847 systemd-networkd[1586]: calie2c4ed265ee: Link UP Feb 12 20:25:16.318071 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:25:16.318222 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie2c4ed265ee: link becomes ready Feb 12 20:25:16.321709 systemd-networkd[1586]: calie2c4ed265ee: Gained carrier Feb 12 20:25:16.322018 (udev-worker)[3589]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.189 [INFO][3575] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0 nginx-deployment-8ffc5cf85- default 195a7bd2-b313-41cb-9a9a-4c42180b67a9 981 0 2024-02-12 20:24:59 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.16.103 nginx-deployment-8ffc5cf85-6tgdf eth0 default [] [] [kns.default ksa.default.default] calie2c4ed265ee [] []}} ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Namespace="default" Pod="nginx-deployment-8ffc5cf85-6tgdf" WorkloadEndpoint="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.189 [INFO][3575] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Namespace="default" Pod="nginx-deployment-8ffc5cf85-6tgdf" WorkloadEndpoint="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.239 [INFO][3583] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" HandleID="k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.259 [INFO][3583] ipam_plugin.go 268: Auto assigning IP ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" HandleID="k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb8d0), 
Attrs:map[string]string{"namespace":"default", "node":"172.31.16.103", "pod":"nginx-deployment-8ffc5cf85-6tgdf", "timestamp":"2024-02-12 20:25:16.239269402 +0000 UTC"}, Hostname:"172.31.16.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.259 [INFO][3583] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.260 [INFO][3583] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.260 [INFO][3583] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.103' Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.263 [INFO][3583] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.271 [INFO][3583] ipam.go 372: Looking up existing affinities for host host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.280 [INFO][3583] ipam.go 489: Trying affinity for 192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.283 [INFO][3583] ipam.go 155: Attempting to load block cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.286 [INFO][3583] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.287 [INFO][3583] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.192/26 handle="k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.289 [INFO][3583] ipam.go 1682: Creating 
new handle: k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.295 [INFO][3583] ipam.go 1203: Writing block in order to claim IPs block=192.168.78.192/26 handle="k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.304 [INFO][3583] ipam.go 1216: Successfully claimed IPs: [192.168.78.194/26] block=192.168.78.192/26 handle="k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.304 [INFO][3583] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.194/26] handle="k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" host="172.31.16.103" Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.304 [INFO][3583] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:25:16.356576 env[1799]: 2024-02-12 20:25:16.304 [INFO][3583] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.78.194/26] IPv6=[] ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" HandleID="k8s-pod-network.ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.357897 env[1799]: 2024-02-12 20:25:16.307 [INFO][3575] k8s.go 385: Populated endpoint ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Namespace="default" Pod="nginx-deployment-8ffc5cf85-6tgdf" WorkloadEndpoint="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", 
UID:"195a7bd2-b313-41cb-9a9a-4c42180b67a9", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-6tgdf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie2c4ed265ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:16.357897 env[1799]: 2024-02-12 20:25:16.308 [INFO][3575] k8s.go 386: Calico CNI using IPs: [192.168.78.194/32] ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Namespace="default" Pod="nginx-deployment-8ffc5cf85-6tgdf" WorkloadEndpoint="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.357897 env[1799]: 2024-02-12 20:25:16.308 [INFO][3575] dataplane_linux.go 68: Setting the host side veth name to calie2c4ed265ee ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Namespace="default" Pod="nginx-deployment-8ffc5cf85-6tgdf" WorkloadEndpoint="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.357897 env[1799]: 2024-02-12 20:25:16.324 [INFO][3575] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Namespace="default" Pod="nginx-deployment-8ffc5cf85-6tgdf" 
WorkloadEndpoint="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.357897 env[1799]: 2024-02-12 20:25:16.326 [INFO][3575] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Namespace="default" Pod="nginx-deployment-8ffc5cf85-6tgdf" WorkloadEndpoint="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"195a7bd2-b313-41cb-9a9a-4c42180b67a9", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b", Pod:"nginx-deployment-8ffc5cf85-6tgdf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie2c4ed265ee", MAC:"ce:27:98:45:ec:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:16.357897 env[1799]: 2024-02-12 20:25:16.348 [INFO][3575] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b" Namespace="default" Pod="nginx-deployment-8ffc5cf85-6tgdf" WorkloadEndpoint="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:16.378000 audit[3602]: NETFILTER_CFG table=filter:84 family=2 entries=40 op=nft_register_chain pid=3602 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:16.381818 kernel: kauditd_printk_skb: 89 callbacks suppressed Feb 12 20:25:16.381949 kernel: audit: type=1325 audit(1707769516.378:258): table=filter:84 family=2 entries=40 op=nft_register_chain pid=3602 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:16.378000 audit[3602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21064 a0=3 a1=ffffe94c1cc0 a2=0 a3=ffffb7e23fa8 items=0 ppid=3177 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:16.401272 kernel: audit: type=1300 audit(1707769516.378:258): arch=c00000b7 syscall=211 success=yes exit=21064 a0=3 a1=ffffe94c1cc0 a2=0 a3=ffffb7e23fa8 items=0 ppid=3177 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:16.401355 env[1799]: time="2024-02-12T20:25:16.399239525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:16.401355 env[1799]: time="2024-02-12T20:25:16.399386814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:16.401355 env[1799]: time="2024-02-12T20:25:16.399454086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:16.401355 env[1799]: time="2024-02-12T20:25:16.399752131Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b pid=3612 runtime=io.containerd.runc.v2 Feb 12 20:25:16.378000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:16.409383 kernel: audit: type=1327 audit(1707769516.378:258): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:16.511150 env[1799]: time="2024-02-12T20:25:16.511092444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-6tgdf,Uid:195a7bd2-b313-41cb-9a9a-4c42180b67a9,Namespace:default,Attempt:1,} returns sandbox id \"ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b\"" Feb 12 20:25:16.514574 env[1799]: time="2024-02-12T20:25:16.514516781Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:25:16.690809 kubelet[2315]: E0212 20:25:16.690628 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:17.691389 kubelet[2315]: E0212 20:25:17.691319 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:18.273146 systemd-networkd[1586]: calie2c4ed265ee: Gained IPv6LL Feb 12 20:25:18.692597 kubelet[2315]: E0212 20:25:18.692448 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:19.693283 kubelet[2315]: E0212 20:25:19.693219 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 20:25:20.576145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238092768.mount: Deactivated successfully. Feb 12 20:25:20.644206 kubelet[2315]: E0212 20:25:20.644105 2315 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:20.660843 env[1799]: time="2024-02-12T20:25:20.660729191Z" level=info msg="StopPodSandbox for \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\"" Feb 12 20:25:20.693470 kubelet[2315]: E0212 20:25:20.693390 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.757 [WARNING][3670] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"195a7bd2-b313-41cb-9a9a-4c42180b67a9", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b", 
Pod:"nginx-deployment-8ffc5cf85-6tgdf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie2c4ed265ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.757 [INFO][3670] k8s.go 578: Cleaning up netns ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.757 [INFO][3670] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" iface="eth0" netns="" Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.757 [INFO][3670] k8s.go 585: Releasing IP address(es) ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.757 [INFO][3670] utils.go 188: Calico CNI releasing IP address ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.813 [INFO][3677] ipam_plugin.go 415: Releasing address using handleID ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.813 [INFO][3677] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.814 [INFO][3677] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.828 [WARNING][3677] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.828 [INFO][3677] ipam_plugin.go 443: Releasing address using workloadID ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.831 [INFO][3677] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:25:20.835691 env[1799]: 2024-02-12 20:25:20.833 [INFO][3670] k8s.go 591: Teardown processing complete. ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:20.835691 env[1799]: time="2024-02-12T20:25:20.835510669Z" level=info msg="TearDown network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\" successfully" Feb 12 20:25:20.835691 env[1799]: time="2024-02-12T20:25:20.835563445Z" level=info msg="StopPodSandbox for \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\" returns successfully" Feb 12 20:25:20.836925 env[1799]: time="2024-02-12T20:25:20.836695578Z" level=info msg="RemovePodSandbox for \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\"" Feb 12 20:25:20.836925 env[1799]: time="2024-02-12T20:25:20.836750754Z" level=info msg="Forcibly stopping sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\"" Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:20.974 [WARNING][3697] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"195a7bd2-b313-41cb-9a9a-4c42180b67a9", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 24, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b", Pod:"nginx-deployment-8ffc5cf85-6tgdf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie2c4ed265ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:20.975 [INFO][3697] k8s.go 578: Cleaning up netns ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:20.975 [INFO][3697] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" iface="eth0" netns="" Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:20.975 [INFO][3697] k8s.go 585: Releasing IP address(es) ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:20.975 [INFO][3697] utils.go 188: Calico CNI releasing IP address ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:21.014 [INFO][3705] ipam_plugin.go 415: Releasing address using handleID ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:21.014 [INFO][3705] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:21.014 [INFO][3705] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:21.028 [WARNING][3705] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:21.028 [INFO][3705] ipam_plugin.go 443: Releasing address using workloadID ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" HandleID="k8s-pod-network.11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Workload="172.31.16.103-k8s-nginx--deployment--8ffc5cf85--6tgdf-eth0" Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:21.031 [INFO][3705] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:25:21.035170 env[1799]: 2024-02-12 20:25:21.033 [INFO][3697] k8s.go 591: Teardown processing complete. ContainerID="11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8" Feb 12 20:25:21.036083 env[1799]: time="2024-02-12T20:25:21.035200792Z" level=info msg="TearDown network for sandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\" successfully" Feb 12 20:25:21.039404 env[1799]: time="2024-02-12T20:25:21.039335551Z" level=info msg="RemovePodSandbox \"11def28fd6d15495c8966bbf81e879667b7ec575f4a4b71b4676fc69c7a8fdb8\" returns successfully" Feb 12 20:25:21.040116 env[1799]: time="2024-02-12T20:25:21.040055457Z" level=info msg="StopPodSandbox for \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\"" Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.139 [WARNING][3726] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-csi--node--driver--mbm2f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"025e7f47-e5fc-44a0-ae5d-89a7aa729804", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c", Pod:"csi-node-driver-mbm2f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali100955a304e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.140 [INFO][3726] k8s.go 578: Cleaning up netns ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.140 [INFO][3726] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" iface="eth0" netns="" Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.140 [INFO][3726] k8s.go 585: Releasing IP address(es) ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.140 [INFO][3726] utils.go 188: Calico CNI releasing IP address ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.180 [INFO][3732] ipam_plugin.go 415: Releasing address using handleID ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.181 [INFO][3732] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.181 [INFO][3732] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.199 [WARNING][3732] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.200 [INFO][3732] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.202 [INFO][3732] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:25:21.206874 env[1799]: 2024-02-12 20:25:21.204 [INFO][3726] k8s.go 591: Teardown processing complete. ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:21.206874 env[1799]: time="2024-02-12T20:25:21.206798677Z" level=info msg="TearDown network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\" successfully" Feb 12 20:25:21.208793 env[1799]: time="2024-02-12T20:25:21.206844961Z" level=info msg="StopPodSandbox for \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\" returns successfully" Feb 12 20:25:21.209179 env[1799]: time="2024-02-12T20:25:21.209117301Z" level=info msg="RemovePodSandbox for \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\"" Feb 12 20:25:21.209301 env[1799]: time="2024-02-12T20:25:21.209182869Z" level=info msg="Forcibly stopping sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\"" Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.302 [WARNING][3754] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-csi--node--driver--mbm2f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"025e7f47-e5fc-44a0-ae5d-89a7aa729804", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 24, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"e244591c98aa4c37027604d68ba5cc246a7b436bae21b9055575befc2584520c", Pod:"csi-node-driver-mbm2f", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali100955a304e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.302 [INFO][3754] k8s.go 578: Cleaning up netns ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.303 [INFO][3754] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" iface="eth0" netns="" Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.303 [INFO][3754] k8s.go 585: Releasing IP address(es) ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.303 [INFO][3754] utils.go 188: Calico CNI releasing IP address ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.349 [INFO][3760] ipam_plugin.go 415: Releasing address using handleID ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.349 [INFO][3760] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.349 [INFO][3760] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.372 [WARNING][3760] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.372 [INFO][3760] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" HandleID="k8s-pod-network.8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Workload="172.31.16.103-k8s-csi--node--driver--mbm2f-eth0" Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.401 [INFO][3760] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:25:21.406268 env[1799]: 2024-02-12 20:25:21.403 [INFO][3754] k8s.go 591: Teardown processing complete. ContainerID="8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2" Feb 12 20:25:21.407159 env[1799]: time="2024-02-12T20:25:21.406309898Z" level=info msg="TearDown network for sandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\" successfully" Feb 12 20:25:21.410447 env[1799]: time="2024-02-12T20:25:21.410377001Z" level=info msg="RemovePodSandbox \"8030365b2483e3f4e72a4cd62f311da5b700d7c2b9ac08579c8767da9cab8ba2\" returns successfully" Feb 12 20:25:21.694402 kubelet[2315]: E0212 20:25:21.694346 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:22.228511 env[1799]: time="2024-02-12T20:25:22.228442434Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:22.231701 env[1799]: time="2024-02-12T20:25:22.231631205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:22.235263 env[1799]: time="2024-02-12T20:25:22.235195685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:22.238683 env[1799]: time="2024-02-12T20:25:22.238618457Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:22.240635 env[1799]: time="2024-02-12T20:25:22.240582119Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 20:25:22.244798 env[1799]: time="2024-02-12T20:25:22.244726129Z" level=info msg="CreateContainer within sandbox \"ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 20:25:22.263294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1972380166.mount: Deactivated successfully. Feb 12 20:25:22.273392 env[1799]: time="2024-02-12T20:25:22.273300722Z" level=info msg="CreateContainer within sandbox \"ab3e9ead9d4f2c7cc8da5f214d17488ae1edba55a7ccbda9798c4eec6d41686b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"38f90079439f596de858ab87a516db4998d63c570ad661ba085785f836a68437\"" Feb 12 20:25:22.274371 env[1799]: time="2024-02-12T20:25:22.274317858Z" level=info msg="StartContainer for \"38f90079439f596de858ab87a516db4998d63c570ad661ba085785f836a68437\"" Feb 12 20:25:22.385244 env[1799]: time="2024-02-12T20:25:22.385178002Z" level=info msg="StartContainer for \"38f90079439f596de858ab87a516db4998d63c570ad661ba085785f836a68437\" returns successfully" Feb 12 20:25:22.695495 kubelet[2315]: E0212 20:25:22.695323 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:23.696151 kubelet[2315]: E0212 20:25:23.696093 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:24.697335 kubelet[2315]: E0212 20:25:24.697258 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:25.698222 kubelet[2315]: E0212 20:25:25.698153 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:26.699273 kubelet[2315]: E0212 20:25:26.699226 2315 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:27.700523 kubelet[2315]: E0212 20:25:27.700457 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:28.700904 kubelet[2315]: E0212 20:25:28.700862 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:28.799833 systemd[1]: run-containerd-runc-k8s.io-4ca38e1b8ddf9f7d8593395fd22139e13e7ed1b42dfe64e98d32e21e90380184-runc.CfgIPZ.mount: Deactivated successfully. Feb 12 20:25:28.933062 kubelet[2315]: I0212 20:25:28.932990 2315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-6tgdf" podStartSLOduration=-9.223372006921879e+09 pod.CreationTimestamp="2024-02-12 20:24:59 +0000 UTC" firstStartedPulling="2024-02-12 20:25:16.51354852 +0000 UTC m=+56.822655449" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:23.220781962 +0000 UTC m=+63.529888915" watchObservedRunningTime="2024-02-12 20:25:28.932896715 +0000 UTC m=+69.242003680" Feb 12 20:25:29.702423 kubelet[2315]: E0212 20:25:29.702354 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:29.865790 kubelet[2315]: I0212 20:25:29.865730 2315 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:29.877000 audit[3878]: NETFILTER_CFG table=filter:85 family=2 entries=18 op=nft_register_rule pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:29.877000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=fffff34a0130 a2=0 a3=ffffb498b6c0 items=0 ppid=2591 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
20:25:29.898139 kernel: audit: type=1325 audit(1707769529.877:259): table=filter:85 family=2 entries=18 op=nft_register_rule pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:29.898263 kernel: audit: type=1300 audit(1707769529.877:259): arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=fffff34a0130 a2=0 a3=ffffb498b6c0 items=0 ppid=2591 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:29.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:29.904493 kernel: audit: type=1327 audit(1707769529.877:259): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:29.887000 audit[3878]: NETFILTER_CFG table=nat:86 family=2 entries=78 op=nft_register_rule pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:29.887000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffff34a0130 a2=0 a3=ffffb498b6c0 items=0 ppid=2591 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:29.913044 kernel: audit: type=1325 audit(1707769529.887:260): table=nat:86 family=2 entries=78 op=nft_register_rule pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:29.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:29.925992 kernel: audit: type=1300 audit(1707769529.887:260): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffff34a0130 a2=0 a3=ffffb498b6c0 items=0 ppid=2591 
pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:29.926062 kernel: audit: type=1327 audit(1707769529.887:260): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:29.951561 kubelet[2315]: I0212 20:25:29.951522 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f33a9248-1bf5-421c-9199-236465743c9f-data\") pod \"nfs-server-provisioner-0\" (UID: \"f33a9248-1bf5-421c-9199-236465743c9f\") " pod="default/nfs-server-provisioner-0" Feb 12 20:25:29.951851 kubelet[2315]: I0212 20:25:29.951824 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgx5\" (UniqueName: \"kubernetes.io/projected/f33a9248-1bf5-421c-9199-236465743c9f-kube-api-access-rdgx5\") pod \"nfs-server-provisioner-0\" (UID: \"f33a9248-1bf5-421c-9199-236465743c9f\") " pod="default/nfs-server-provisioner-0" Feb 12 20:25:29.999000 audit[3907]: NETFILTER_CFG table=filter:87 family=2 entries=30 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:29.999000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffd5334450 a2=0 a3=ffffb931e6c0 items=0 ppid=2591 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:30.018975 kernel: audit: type=1325 audit(1707769529.999:261): table=filter:87 family=2 entries=30 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:30.019109 kernel: audit: type=1300 audit(1707769529.999:261): 
arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffd5334450 a2=0 a3=ffffb931e6c0 items=0 ppid=2591 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:29.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:30.024853 kernel: audit: type=1327 audit(1707769529.999:261): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:30.002000 audit[3907]: NETFILTER_CFG table=nat:88 family=2 entries=78 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:30.031027 kernel: audit: type=1325 audit(1707769530.002:262): table=nat:88 family=2 entries=78 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:30.002000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd5334450 a2=0 a3=ffffb931e6c0 items=0 ppid=2591 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:30.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:30.194397 env[1799]: time="2024-02-12T20:25:30.194304169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f33a9248-1bf5-421c-9199-236465743c9f,Namespace:default,Attempt:0,}" Feb 12 20:25:30.398225 (udev-worker)[3881]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 20:25:30.405564 systemd-networkd[1586]: cali60e51b789ff: Link UP Feb 12 20:25:30.412690 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:25:30.412796 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 12 20:25:30.413113 systemd-networkd[1586]: cali60e51b789ff: Gained carrier Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.280 [INFO][3909] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.103-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default f33a9248-1bf5-421c-9199-236465743c9f 1040 0 2024-02-12 20:25:29 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.16.103 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.103-k8s-nfs--server--provisioner--0-" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.281 [INFO][3909] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:25:30.434337 
env[1799]: 2024-02-12 20:25:30.326 [INFO][3921] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" HandleID="k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Workload="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.347 [INFO][3921] ipam_plugin.go 268: Auto assigning IP ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" HandleID="k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Workload="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400021f8d0), Attrs:map[string]string{"namespace":"default", "node":"172.31.16.103", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-12 20:25:30.326533433 +0000 UTC"}, Hostname:"172.31.16.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.347 [INFO][3921] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.347 [INFO][3921] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.347 [INFO][3921] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.103' Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.350 [INFO][3921] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" host="172.31.16.103" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.359 [INFO][3921] ipam.go 372: Looking up existing affinities for host host="172.31.16.103" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.367 [INFO][3921] ipam.go 489: Trying affinity for 192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.371 [INFO][3921] ipam.go 155: Attempting to load block cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.375 [INFO][3921] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.375 [INFO][3921] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.192/26 handle="k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" host="172.31.16.103" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.378 [INFO][3921] ipam.go 1682: Creating new handle: k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731 Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.383 [INFO][3921] ipam.go 1203: Writing block in order to claim IPs block=192.168.78.192/26 handle="k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" host="172.31.16.103" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.391 [INFO][3921] ipam.go 1216: Successfully claimed IPs: [192.168.78.195/26] block=192.168.78.192/26 handle="k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" host="172.31.16.103" Feb 12 20:25:30.434337 
env[1799]: 2024-02-12 20:25:30.392 [INFO][3921] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.195/26] handle="k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" host="172.31.16.103" Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.392 [INFO][3921] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:25:30.434337 env[1799]: 2024-02-12 20:25:30.392 [INFO][3921] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.78.195/26] IPv6=[] ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" HandleID="k8s-pod-network.1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Workload="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:25:30.435621 env[1799]: 2024-02-12 20:25:30.394 [INFO][3909] k8s.go 385: Populated endpoint ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f33a9248-1bf5-421c-9199-236465743c9f", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 25, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.78.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:30.435621 env[1799]: 2024-02-12 20:25:30.395 [INFO][3909] k8s.go 386: Calico CNI using IPs: [192.168.78.195/32] ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:25:30.435621 env[1799]: 2024-02-12 20:25:30.395 [INFO][3909] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:25:30.435621 env[1799]: 2024-02-12 20:25:30.420 [INFO][3909] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:25:30.436104 env[1799]: 2024-02-12 20:25:30.421 [INFO][3909] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f33a9248-1bf5-421c-9199-236465743c9f", 
ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 25, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.78.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"6e:30:00:97:4e:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:30.436104 env[1799]: 2024-02-12 20:25:30.431 [INFO][3909] k8s.go 491: Wrote updated endpoint to datastore ContainerID="1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.103-k8s-nfs--server--provisioner--0-eth0" Feb 12 20:25:30.476302 env[1799]: time="2024-02-12T20:25:30.476165108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:30.476454 env[1799]: time="2024-02-12T20:25:30.476356665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:30.476553 env[1799]: time="2024-02-12T20:25:30.476442057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:30.476866 env[1799]: time="2024-02-12T20:25:30.476785690Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731 pid=3950 runtime=io.containerd.runc.v2 Feb 12 20:25:30.478000 audit[3955]: NETFILTER_CFG table=filter:89 family=2 entries=38 op=nft_register_chain pid=3955 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:30.478000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19500 a0=3 a1=ffffc2180f70 a2=0 a3=ffff9afdffa8 items=0 ppid=3177 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:30.478000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:30.587539 env[1799]: time="2024-02-12T20:25:30.587483514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f33a9248-1bf5-421c-9199-236465743c9f,Namespace:default,Attempt:0,} returns sandbox id \"1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731\"" Feb 12 20:25:30.590303 env[1799]: time="2024-02-12T20:25:30.590254728Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 20:25:30.704980 kubelet[2315]: E0212 20:25:30.703199 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:31.074089 systemd[1]: run-containerd-runc-k8s.io-1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731-runc.1VKa9W.mount: Deactivated successfully. 
Feb 12 20:25:31.584305 systemd-networkd[1586]: cali60e51b789ff: Gained IPv6LL Feb 12 20:25:31.703675 kubelet[2315]: E0212 20:25:31.703607 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:32.704764 kubelet[2315]: E0212 20:25:32.704696 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:33.705087 kubelet[2315]: E0212 20:25:33.705018 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:33.932842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount326536665.mount: Deactivated successfully. Feb 12 20:25:34.369338 kubelet[2315]: I0212 20:25:34.369282 2315 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:34.447000 audit[4012]: NETFILTER_CFG table=filter:90 family=2 entries=31 op=nft_register_rule pid=4012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.447000 audit[4012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=fffff6164a50 a2=0 a3=ffffa06b76c0 items=0 ppid=2591 pid=4012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.452000 audit[4012]: NETFILTER_CFG table=nat:91 family=2 entries=78 op=nft_register_rule pid=4012 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.452000 audit[4012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffff6164a50 a2=0 a3=ffffa06b76c0 items=0 ppid=2591 pid=4012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.485737 kubelet[2315]: I0212 20:25:34.485674 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvbq\" (UniqueName: \"kubernetes.io/projected/45fd5d37-f3d3-4251-baba-50c693b76af2-kube-api-access-hzvbq\") pod \"calico-apiserver-7d5f545cdf-hschc\" (UID: \"45fd5d37-f3d3-4251-baba-50c693b76af2\") " pod="calico-apiserver/calico-apiserver-7d5f545cdf-hschc" Feb 12 20:25:34.485933 kubelet[2315]: I0212 20:25:34.485794 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/45fd5d37-f3d3-4251-baba-50c693b76af2-calico-apiserver-certs\") pod \"calico-apiserver-7d5f545cdf-hschc\" (UID: \"45fd5d37-f3d3-4251-baba-50c693b76af2\") " pod="calico-apiserver/calico-apiserver-7d5f545cdf-hschc" Feb 12 20:25:34.539000 audit[4038]: NETFILTER_CFG table=filter:92 family=2 entries=32 op=nft_register_rule pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.539000 audit[4038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=fffffe1e0080 a2=0 a3=ffffa9eee6c0 items=0 ppid=2591 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.545000 audit[4038]: NETFILTER_CFG table=nat:93 family=2 entries=78 op=nft_register_rule pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 
20:25:34.545000 audit[4038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffffe1e0080 a2=0 a3=ffffa9eee6c0 items=0 ppid=2591 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.587520 kubelet[2315]: E0212 20:25:34.587448 2315 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 12 20:25:34.588223 kubelet[2315]: E0212 20:25:34.588180 2315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45fd5d37-f3d3-4251-baba-50c693b76af2-calico-apiserver-certs podName:45fd5d37-f3d3-4251-baba-50c693b76af2 nodeName:}" failed. No retries permitted until 2024-02-12 20:25:35.08786694 +0000 UTC m=+75.396973881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/45fd5d37-f3d3-4251-baba-50c693b76af2-calico-apiserver-certs") pod "calico-apiserver-7d5f545cdf-hschc" (UID: "45fd5d37-f3d3-4251-baba-50c693b76af2") : secret "calico-apiserver-certs" not found Feb 12 20:25:34.706998 kubelet[2315]: E0212 20:25:34.706127 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:35.277196 env[1799]: time="2024-02-12T20:25:35.277107013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5f545cdf-hschc,Uid:45fd5d37-f3d3-4251-baba-50c693b76af2,Namespace:calico-apiserver,Attempt:0,}" Feb 12 20:25:35.673857 (udev-worker)[4063]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 20:25:35.677289 systemd-networkd[1586]: cali3e3f1b5e1ee: Link UP Feb 12 20:25:35.682022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:25:35.682188 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3e3f1b5e1ee: link becomes ready Feb 12 20:25:35.687533 systemd-networkd[1586]: cali3e3f1b5e1ee: Gained carrier Feb 12 20:25:35.709015 kubelet[2315]: E0212 20:25:35.708939 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.498 [INFO][4041] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0 calico-apiserver-7d5f545cdf- calico-apiserver 45fd5d37-f3d3-4251-baba-50c693b76af2 1098 0 2024-02-12 20:25:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d5f545cdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172.31.16.103 calico-apiserver-7d5f545cdf-hschc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3e3f1b5e1ee [] []}} ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-hschc" WorkloadEndpoint="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.498 [INFO][4041] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-hschc" WorkloadEndpoint="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.565 [INFO][4057] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" HandleID="k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Workload="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.592 [INFO][4057] ipam_plugin.go 268: Auto assigning IP ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" HandleID="k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Workload="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400023b8f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172.31.16.103", "pod":"calico-apiserver-7d5f545cdf-hschc", "timestamp":"2024-02-12 20:25:35.565252 +0000 UTC"}, Hostname:"172.31.16.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.593 [INFO][4057] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.593 [INFO][4057] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.593 [INFO][4057] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.103' Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.596 [INFO][4057] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" host="172.31.16.103" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.607 [INFO][4057] ipam.go 372: Looking up existing affinities for host host="172.31.16.103" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.617 [INFO][4057] ipam.go 489: Trying affinity for 192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.622 [INFO][4057] ipam.go 155: Attempting to load block cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.626 [INFO][4057] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.626 [INFO][4057] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.192/26 handle="k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" host="172.31.16.103" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.629 [INFO][4057] ipam.go 1682: Creating new handle: k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9 Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.639 [INFO][4057] ipam.go 1203: Writing block in order to claim IPs block=192.168.78.192/26 handle="k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" host="172.31.16.103" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.654 [INFO][4057] ipam.go 1216: Successfully claimed IPs: [192.168.78.196/26] block=192.168.78.192/26 handle="k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" host="172.31.16.103" Feb 12 20:25:35.728050 
env[1799]: 2024-02-12 20:25:35.654 [INFO][4057] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.196/26] handle="k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" host="172.31.16.103" Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.654 [INFO][4057] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:25:35.728050 env[1799]: 2024-02-12 20:25:35.654 [INFO][4057] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.78.196/26] IPv6=[] ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" HandleID="k8s-pod-network.53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Workload="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" Feb 12 20:25:35.729394 env[1799]: 2024-02-12 20:25:35.658 [INFO][4041] k8s.go 385: Populated endpoint ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-hschc" WorkloadEndpoint="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0", GenerateName:"calico-apiserver-7d5f545cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"45fd5d37-f3d3-4251-baba-50c693b76af2", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5f545cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"", Pod:"calico-apiserver-7d5f545cdf-hschc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e3f1b5e1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:35.729394 env[1799]: 2024-02-12 20:25:35.658 [INFO][4041] k8s.go 386: Calico CNI using IPs: [192.168.78.196/32] ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-hschc" WorkloadEndpoint="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" Feb 12 20:25:35.729394 env[1799]: 2024-02-12 20:25:35.658 [INFO][4041] dataplane_linux.go 68: Setting the host side veth name to cali3e3f1b5e1ee ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-hschc" WorkloadEndpoint="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" Feb 12 20:25:35.729394 env[1799]: 2024-02-12 20:25:35.688 [INFO][4041] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-hschc" WorkloadEndpoint="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" Feb 12 20:25:35.729394 env[1799]: 2024-02-12 20:25:35.690 [INFO][4041] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-hschc" WorkloadEndpoint="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0", GenerateName:"calico-apiserver-7d5f545cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"45fd5d37-f3d3-4251-baba-50c693b76af2", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5f545cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9", Pod:"calico-apiserver-7d5f545cdf-hschc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.78.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3e3f1b5e1ee", MAC:"76:37:e9:10:ca:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:35.729394 env[1799]: 2024-02-12 20:25:35.722 [INFO][4041] k8s.go 491: Wrote updated endpoint to datastore ContainerID="53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-hschc" WorkloadEndpoint="172.31.16.103-k8s-calico--apiserver--7d5f545cdf--hschc-eth0" Feb 12 20:25:35.849000 audit[4089]: NETFILTER_CFG table=filter:94 family=2 entries=55 
op=nft_register_chain pid=4089 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:35.853573 kernel: kauditd_printk_skb: 17 callbacks suppressed Feb 12 20:25:35.853715 kernel: audit: type=1325 audit(1707769535.849:268): table=filter:94 family=2 entries=55 op=nft_register_chain pid=4089 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:35.861259 env[1799]: time="2024-02-12T20:25:35.859474816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:35.861259 env[1799]: time="2024-02-12T20:25:35.859600228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:35.861259 env[1799]: time="2024-02-12T20:25:35.859628128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:35.861259 env[1799]: time="2024-02-12T20:25:35.859919021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9 pid=4088 runtime=io.containerd.runc.v2 Feb 12 20:25:35.849000 audit[4089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28104 a0=3 a1=ffffe8561c90 a2=0 a3=ffff8f1cdfa8 items=0 ppid=3177 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:35.849000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:35.890660 kernel: audit: type=1300 audit(1707769535.849:268): arch=c00000b7 syscall=211 success=yes exit=28104 a0=3 a1=ffffe8561c90 a2=0 a3=ffff8f1cdfa8 items=0 
ppid=3177 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:35.890813 kernel: audit: type=1327 audit(1707769535.849:268): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:36.009594 env[1799]: time="2024-02-12T20:25:36.009520516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5f545cdf-hschc,Uid:45fd5d37-f3d3-4251-baba-50c693b76af2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9\"" Feb 12 20:25:36.710084 kubelet[2315]: E0212 20:25:36.710022 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:37.600207 systemd-networkd[1586]: cali3e3f1b5e1ee: Gained IPv6LL Feb 12 20:25:37.711003 kubelet[2315]: E0212 20:25:37.710924 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:37.906320 env[1799]: time="2024-02-12T20:25:37.905998923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:37.911240 env[1799]: time="2024-02-12T20:25:37.911176917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:37.916068 env[1799]: time="2024-02-12T20:25:37.914946842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:25:37.918875 env[1799]: time="2024-02-12T20:25:37.918824095Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:37.920851 env[1799]: time="2024-02-12T20:25:37.920799982Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 12 20:25:37.922505 env[1799]: time="2024-02-12T20:25:37.922456380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 12 20:25:37.927734 env[1799]: time="2024-02-12T20:25:37.927682135Z" level=info msg="CreateContainer within sandbox \"1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 20:25:37.947925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866759235.mount: Deactivated successfully. 
Feb 12 20:25:37.961449 env[1799]: time="2024-02-12T20:25:37.961385654Z" level=info msg="CreateContainer within sandbox \"1b5009f9502d0731dc9cff40e5480f83202d9e46e451705b7c466c7f0c166731\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0f4f83f549c5a0cae123594c04889880de8fd39a33b3ebc2f0817942c99b1454\"" Feb 12 20:25:37.962672 env[1799]: time="2024-02-12T20:25:37.962623684Z" level=info msg="StartContainer for \"0f4f83f549c5a0cae123594c04889880de8fd39a33b3ebc2f0817942c99b1454\"" Feb 12 20:25:38.076222 env[1799]: time="2024-02-12T20:25:38.076129892Z" level=info msg="StartContainer for \"0f4f83f549c5a0cae123594c04889880de8fd39a33b3ebc2f0817942c99b1454\" returns successfully" Feb 12 20:25:38.269887 kubelet[2315]: I0212 20:25:38.269833 2315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372027584997e+09 pod.CreationTimestamp="2024-02-12 20:25:29 +0000 UTC" firstStartedPulling="2024-02-12 20:25:30.589574554 +0000 UTC m=+70.898681495" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:38.267425208 +0000 UTC m=+78.576532149" watchObservedRunningTime="2024-02-12 20:25:38.269777883 +0000 UTC m=+78.578884836" Feb 12 20:25:38.375000 audit[4196]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:38.375000 audit[4196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffffb57e6d0 a2=0 a3=ffffae5ce6c0 items=0 ppid=2591 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:38.395210 kernel: audit: type=1325 audit(1707769538.375:269): table=filter:95 family=2 entries=20 op=nft_register_rule pid=4196 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Feb 12 20:25:38.395395 kernel: audit: type=1300 audit(1707769538.375:269): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffffb57e6d0 a2=0 a3=ffffae5ce6c0 items=0 ppid=2591 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:38.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:38.401603 kernel: audit: type=1327 audit(1707769538.375:269): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:38.382000 audit[4196]: NETFILTER_CFG table=nat:96 family=2 entries=162 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:38.407947 kernel: audit: type=1325 audit(1707769538.382:270): table=nat:96 family=2 entries=162 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:38.382000 audit[4196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffffb57e6d0 a2=0 a3=ffffae5ce6c0 items=0 ppid=2591 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:38.420390 kernel: audit: type=1300 audit(1707769538.382:270): arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffffb57e6d0 a2=0 a3=ffffae5ce6c0 items=0 ppid=2591 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:38.382000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:38.426366 kernel: audit: type=1327 audit(1707769538.382:270): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:38.711859 kubelet[2315]: E0212 20:25:38.711274 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:39.368652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3342160087.mount: Deactivated successfully. Feb 12 20:25:39.713130 kubelet[2315]: E0212 20:25:39.712951 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:39.722000 audit[4223]: NETFILTER_CFG table=filter:97 family=2 entries=8 op=nft_register_rule pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:39.722000 audit[4223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffdb8c0890 a2=0 a3=ffff9bfe96c0 items=0 ppid=2591 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:39.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:39.733304 kernel: audit: type=1325 audit(1707769539.722:271): table=filter:97 family=2 entries=8 op=nft_register_rule pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:39.731000 audit[4223]: NETFILTER_CFG table=nat:98 family=2 entries=198 op=nft_register_rule pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:39.731000 audit[4223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffdb8c0890 a2=0 a3=ffff9bfe96c0 items=0 
ppid=2591 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:39.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:40.643504 kubelet[2315]: E0212 20:25:40.643436 2315 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:40.714235 kubelet[2315]: E0212 20:25:40.714161 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:41.186477 env[1799]: time="2024-02-12T20:25:41.186408419Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:41.190102 env[1799]: time="2024-02-12T20:25:41.190036238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:41.197214 env[1799]: time="2024-02-12T20:25:41.197153590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:41.205137 env[1799]: time="2024-02-12T20:25:41.205063949Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:41.206542 env[1799]: time="2024-02-12T20:25:41.206469283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference 
\"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 12 20:25:41.209813 env[1799]: time="2024-02-12T20:25:41.209744674Z" level=info msg="CreateContainer within sandbox \"53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 12 20:25:41.232510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997776009.mount: Deactivated successfully. Feb 12 20:25:41.240621 env[1799]: time="2024-02-12T20:25:41.240560777Z" level=info msg="CreateContainer within sandbox \"53be16816d0d370bf84fce31b862329cd471b115a9d372b3d28b8385990a32d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c41bcf958700f1bef156576bdba6618fb2c14c0a3a6cc37e59edbf5cc51e9979\"" Feb 12 20:25:41.241995 env[1799]: time="2024-02-12T20:25:41.241930110Z" level=info msg="StartContainer for \"c41bcf958700f1bef156576bdba6618fb2c14c0a3a6cc37e59edbf5cc51e9979\"" Feb 12 20:25:41.374243 env[1799]: time="2024-02-12T20:25:41.374175162Z" level=info msg="StartContainer for \"c41bcf958700f1bef156576bdba6618fb2c14c0a3a6cc37e59edbf5cc51e9979\" returns successfully" Feb 12 20:25:41.715156 kubelet[2315]: E0212 20:25:41.715071 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:42.283896 kubelet[2315]: I0212 20:25:42.283834 2315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d5f545cdf-hschc" podStartSLOduration=-9.223372028570995e+09 pod.CreationTimestamp="2024-02-12 20:25:34 +0000 UTC" firstStartedPulling="2024-02-12 20:25:36.011391666 +0000 UTC m=+76.320498595" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:42.28061957 +0000 UTC m=+82.589726511" watchObservedRunningTime="2024-02-12 20:25:42.283779905 +0000 UTC m=+82.592886870" Feb 12 20:25:42.385000 audit[4289]: NETFILTER_CFG table=filter:99 
family=2 entries=8 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:42.389036 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 12 20:25:42.389163 kernel: audit: type=1325 audit(1707769542.385:273): table=filter:99 family=2 entries=8 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:42.385000 audit[4289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffffbcd4760 a2=0 a3=ffff922046c0 items=0 ppid=2591 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:42.385000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:42.420822 kernel: audit: type=1300 audit(1707769542.385:273): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffffbcd4760 a2=0 a3=ffff922046c0 items=0 ppid=2591 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:42.421008 kernel: audit: type=1327 audit(1707769542.385:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:42.394000 audit[4289]: NETFILTER_CFG table=nat:100 family=2 entries=198 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:42.428289 kernel: audit: type=1325 audit(1707769542.394:274): table=nat:100 family=2 entries=198 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:42.394000 audit[4289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffffbcd4760 a2=0 a3=ffff922046c0 items=0 
ppid=2591 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:42.440889 kernel: audit: type=1300 audit(1707769542.394:274): arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffffbcd4760 a2=0 a3=ffff922046c0 items=0 ppid=2591 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:42.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:42.447355 kernel: audit: type=1327 audit(1707769542.394:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:42.716060 kubelet[2315]: E0212 20:25:42.716003 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:43.717894 kubelet[2315]: E0212 20:25:43.717828 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:44.718909 kubelet[2315]: E0212 20:25:44.718803 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:45.719868 kubelet[2315]: E0212 20:25:45.719828 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:46.721326 kubelet[2315]: E0212 20:25:46.721284 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:47.722706 kubelet[2315]: E0212 20:25:47.722663 2315 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:48.724175 kubelet[2315]: E0212 20:25:48.724065 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:49.724643 kubelet[2315]: E0212 20:25:49.724584 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:50.725479 kubelet[2315]: E0212 20:25:50.725392 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:51.726645 kubelet[2315]: E0212 20:25:51.726595 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:52.727813 kubelet[2315]: E0212 20:25:52.727713 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:53.728066 kubelet[2315]: E0212 20:25:53.727996 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:54.728934 kubelet[2315]: E0212 20:25:54.728894 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:55.730318 kubelet[2315]: E0212 20:25:55.730253 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:56.730895 kubelet[2315]: E0212 20:25:56.730830 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:57.731327 kubelet[2315]: E0212 20:25:57.731171 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:58.731845 kubelet[2315]: E0212 20:25:58.731778 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 12 20:25:59.732853 kubelet[2315]: E0212 20:25:59.732789 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:00.644200 kubelet[2315]: E0212 20:26:00.644135 2315 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:00.733181 kubelet[2315]: E0212 20:26:00.733114 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:01.733633 kubelet[2315]: E0212 20:26:01.733564 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:02.644385 kubelet[2315]: I0212 20:26:02.644339 2315 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:26:02.734111 kubelet[2315]: E0212 20:26:02.734069 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:02.767078 kubelet[2315]: I0212 20:26:02.767042 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkjk\" (UniqueName: \"kubernetes.io/projected/3d3b5553-97f4-4f06-baa3-aa60ef7b1901-kube-api-access-jvkjk\") pod \"test-pod-1\" (UID: \"3d3b5553-97f4-4f06-baa3-aa60ef7b1901\") " pod="default/test-pod-1" Feb 12 20:26:02.767338 kubelet[2315]: I0212 20:26:02.767314 2315 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b9418b1d-0198-4066-841a-b2100237b3c8\" (UniqueName: \"kubernetes.io/nfs/3d3b5553-97f4-4f06-baa3-aa60ef7b1901-pvc-b9418b1d-0198-4066-841a-b2100237b3c8\") pod \"test-pod-1\" (UID: \"3d3b5553-97f4-4f06-baa3-aa60ef7b1901\") " pod="default/test-pod-1" Feb 12 20:26:02.886000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.902078 kernel: audit: type=1400 audit(1707769562.886:275): avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.902219 kernel: Failed to create system directory netfs Feb 12 20:26:02.902263 kernel: Failed to create system directory netfs Feb 12 20:26:02.886000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.911124 kernel: audit: type=1400 audit(1707769562.886:275): avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.911242 kernel: Failed to create system directory netfs Feb 12 20:26:02.886000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.920979 kernel: audit: type=1400 audit(1707769562.886:275): avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.921056 kernel: Failed to create system directory netfs Feb 12 20:26:02.886000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.932330 kernel: audit: type=1400 
audit(1707769562.886:275): avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.886000 audit[4342]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab000645e0 a1=12c14 a2=aaaad46ae028 a3=aaab00055010 items=0 ppid=1774 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:02.944444 kernel: audit: type=1300 audit(1707769562.886:275): arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab000645e0 a1=12c14 a2=aaaad46ae028 a3=aaab00055010 items=0 ppid=1774 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:02.886000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 20:26:02.948880 kernel: audit: type=1327 audit(1707769562.886:275): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.950013 kernel: Failed to create system directory fscache Feb 12 20:26:02.960134 kernel: audit: type=1400 audit(1707769562.931:276): avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.960243 kernel: Failed to create system directory fscache Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of 
tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.972421 kernel: Failed to create system directory fscache Feb 12 20:26:02.972535 kernel: audit: type=1400 audit(1707769562.931:276): avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.972608 kernel: Failed to create system directory fscache Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.982706 kernel: audit: type=1400 audit(1707769562.931:276): avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.982816 kernel: Failed to create system directory fscache Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.992985 kernel: audit: type=1400 audit(1707769562.931:276): avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.993089 kernel: Failed to create system directory fscache Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:02.931000 audit[4342]: AVC 
avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:02.997424 kernel: Failed to create system directory fscache
Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:02.999621 kernel: Failed to create system directory fscache
Feb 12 20:26:02.999719 kernel: Failed to create system directory fscache
Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.003609 kernel: Failed to create system directory fscache
Feb 12 20:26:03.003694 kernel: Failed to create system directory fscache
Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.007672 kernel: Failed to create system directory fscache
Feb 12 20:26:03.007735 kernel: Failed to create system directory fscache
Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:02.931000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.011652 kernel: Failed to create system directory fscache
Feb 12 20:26:03.014126 kernel: FS-Cache: Loaded
Feb 12 20:26:02.931000 audit[4342]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab00277210 a1=4c344 a2=aaaad46ae028 a3=aaab00055010 items=0 ppid=1774 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:26:02.931000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.048011 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.048118 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.048159 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.051907 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.052007 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.055835 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.055896 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.059749 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.059819 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.063680 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.063739 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.067541 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.067600 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.071454 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.071541 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.075341 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.075412 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.079206 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.079275 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.083132 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.083186 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.086952 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.087030 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.090817 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.090920 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.094770 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.094824 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.098600 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.098706 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.102461 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.104434 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.104517 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.108220 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.108303 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.112016 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.112093 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.115869 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.115945 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.119690 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.119768 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.123604 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.123682 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.127424 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.127474 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.131245 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.133143 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.135052 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.135136 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.138787 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.138884 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.142641 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.142734 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.146430 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.146513 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.150194 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.150283 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.157482 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.157582 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.157624 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.158028 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.161816 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.161931 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.165790 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.165879 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.169591 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.169666 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.173492 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.173589 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.177538 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.177634 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.179527 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.183383 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.183472 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.187134 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.187186 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.190868 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.191032 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.194827 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.194916 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.199085 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.201277 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.203446 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.203541 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.205327 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.209086 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.209162 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.212942 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.213037 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.216802 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.216941 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.220856 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.221134 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.224982 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.225037 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.228841 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.228896 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.232677 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.232732 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.236544 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.236748 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.240620 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.240737 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.244686 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.244740 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.248508 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.248610 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.252351 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.252437 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.256077 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.256130 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.259920 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.260027 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.263795 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.263847 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.267585 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.267682 kernel: Failed to create system directory sunrpc
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 20:26:03.271431 kernel: Failed to create
system directory sunrpc Feb 12 20:26:03.271522 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.275399 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.275585 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.279011 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.279072 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.282936 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.283054 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.035000 
audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.286762 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.286869 kernel: Failed to create system directory sunrpc Feb 12 20:26:03.035000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.299821 kernel: RPC: Registered named UNIX socket transport module. Feb 12 20:26:03.299953 kernel: RPC: Registered udp transport module. Feb 12 20:26:03.300065 kernel: RPC: Registered tcp transport module. Feb 12 20:26:03.304060 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 12 20:26:03.035000 audit[4342]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab002c3560 a1=fbb6c a2=aaaad46ae028 a3=aaab00055010 items=6 ppid=1774 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:03.035000 audit: CWD cwd="/" Feb 12 20:26:03.035000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:26:03.035000 audit: PATH item=1 name=(null) inode=20416 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:26:03.035000 audit: PATH item=2 name=(null) inode=20416 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:26:03.035000 audit: PATH item=3 name=(null) inode=20417 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:26:03.035000 audit: PATH item=4 name=(null) inode=20416 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:26:03.035000 audit: PATH item=5 name=(null) inode=20418 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:26:03.035000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 20:26:03.332000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.342238 kernel: Failed to create system directory nfs Feb 12 20:26:03.332000 audit[4342]: AVC avc: denied { confidentiality } for pid=4342 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.467104 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 20:26:03.332000 audit[4342]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab003efb40 a1=ae35c a2=aaaad46ae028 a3=aaab00055010 items=0 ppid=1774 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:03.332000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.512514 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.551085 kernel: Failed to create system directory 
nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.554833 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.554991 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.558497 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.558549 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.562128 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.562215 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of 
tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.565822 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.565930 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.569498 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.569581 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.573132 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.573226 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 
20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.576907 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.577040 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.580407 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.580492 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.583945 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.584028 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.585744 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality 
} for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.587633 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.591326 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.591429 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.593014 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.594821 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.598491 kernel: 
Failed to create system directory nfs4 Feb 12 20:26:03.598571 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.602127 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.602209 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.603832 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.605750 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.609430 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.609531 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 
audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.611233 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.614858 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.614937 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.618598 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.618687 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 20:26:03.622229 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.622314 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.625833 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.626077 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.629689 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.629775 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.633300 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.633381 kernel: 
Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.635073 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.638685 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.638771 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.642293 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.642379 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.644094 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.647792 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.647887 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.651713 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.651816 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.655414 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.655495 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.659064 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.659145 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.662785 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.662871 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.671842 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.672035 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.672116 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.672187 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.674471 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.674591 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.676523 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.680282 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.680370 kernel: Failed to create system directory nfs4 Feb 12 20:26:03.500000 audit[4348]: AVC avc: denied { confidentiality } for pid=4348 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.734635 kubelet[2315]: E0212 20:26:03.734549 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:03.897739 kernel: NFS: Registering the id_resolver key type Feb 12 20:26:03.897984 kernel: Key type id_resolver registered Feb 12 20:26:03.898058 kernel: Key type id_legacy registered Feb 12 20:26:03.500000 audit[4348]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=ffff8453b010 a1=167c04 a2=aaaab21de028 a3=aaaac3678010 items=0 ppid=1774 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:03.500000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634 Feb 12 20:26:03.908000 audit[4349]: AVC avc: denied { confidentiality } for pid=4349 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 20:26:03.915839 kernel: Failed to create system directory rpcgss Feb 12 20:26:03.967983 kernel: Failed to create system directory rpcgss Feb 12 20:26:03.908000 audit[4349]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=ffffbc823010 a1=3e09c a2=aaaabdb8e028 a3=aaaad4586010 items=0 ppid=1774 pid=4349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:03.908000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 12 20:26:03.994542 nfsidmap[4359]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 12 20:26:04.000129 nfsidmap[4361]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 12 20:26:04.014000 audit[1]: AVC avc: denied { watch_reads } for 
pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2677 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:26:04.014000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2677 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:26:04.015000 audit[2018]: AVC avc: denied { watch_reads } for pid=2018 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2677 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:26:04.015000 audit[2018]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=d a1=aaab1bc33330 a2=10 a3=0 items=0 ppid=1 pid=2018 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:04.015000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 20:26:04.015000 audit[2018]: AVC avc: denied { watch_reads } for pid=2018 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2677 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:26:04.015000 audit[2018]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=d a1=aaab1bc33330 a2=10 a3=0 items=0 ppid=1 pid=2018 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:04.015000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 20:26:04.015000 audit[2018]: AVC avc: denied { watch_reads } for pid=2018 comm="systemd" path="/run/mount/utab.lock" 
dev="tmpfs" ino=2677 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:26:04.015000 audit[2018]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=d a1=aaab1bc33330 a2=10 a3=0 items=0 ppid=1 pid=2018 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:04.015000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 20:26:04.017000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2677 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 20:26:04.151941 env[1799]: time="2024-02-12T20:26:04.151445851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3d3b5553-97f4-4f06-baa3-aa60ef7b1901,Namespace:default,Attempt:0,}" Feb 12 20:26:04.355753 (udev-worker)[4358]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 20:26:04.359568 systemd-networkd[1586]: cali5ec59c6bf6e: Link UP Feb 12 20:26:04.369084 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:26:04.369198 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 12 20:26:04.368475 systemd-networkd[1586]: cali5ec59c6bf6e: Gained carrier Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.235 [INFO][4362] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.103-k8s-test--pod--1-eth0 default 3d3b5553-97f4-4f06-baa3-aa60ef7b1901 1210 0 2024-02-12 20:25:30 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.16.103 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.103-k8s-test--pod--1-" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.236 [INFO][4362] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.103-k8s-test--pod--1-eth0" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.291 [INFO][4373] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" HandleID="k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Workload="172.31.16.103-k8s-test--pod--1-eth0" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.309 [INFO][4373] ipam_plugin.go 268: Auto assigning IP ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" HandleID="k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Workload="172.31.16.103-k8s-test--pod--1-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000291240), Attrs:map[string]string{"namespace":"default", "node":"172.31.16.103", "pod":"test-pod-1", "timestamp":"2024-02-12 20:26:04.291132296 +0000 UTC"}, Hostname:"172.31.16.103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.310 [INFO][4373] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.310 [INFO][4373] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.310 [INFO][4373] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.103' Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.313 [INFO][4373] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.318 [INFO][4373] ipam.go 372: Looking up existing affinities for host host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.325 [INFO][4373] ipam.go 489: Trying affinity for 192.168.78.192/26 host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.327 [INFO][4373] ipam.go 155: Attempting to load block cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.332 [INFO][4373] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.78.192/26 host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.332 [INFO][4373] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.78.192/26 handle="k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 
2024-02-12 20:26:04.335 [INFO][4373] ipam.go 1682: Creating new handle: k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.341 [INFO][4373] ipam.go 1203: Writing block in order to claim IPs block=192.168.78.192/26 handle="k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.349 [INFO][4373] ipam.go 1216: Successfully claimed IPs: [192.168.78.197/26] block=192.168.78.192/26 handle="k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.349 [INFO][4373] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.78.197/26] handle="k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" host="172.31.16.103" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.349 [INFO][4373] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.350 [INFO][4373] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.78.197/26] IPv6=[] ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" HandleID="k8s-pod-network.47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Workload="172.31.16.103-k8s-test--pod--1-eth0" Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.353 [INFO][4362] k8s.go 385: Populated endpoint ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.103-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3d3b5553-97f4-4f06-baa3-aa60ef7b1901", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:26:04.389237 env[1799]: 2024-02-12 20:26:04.353 [INFO][4362] k8s.go 386: Calico CNI using IPs: [192.168.78.197/32] 
ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.103-k8s-test--pod--1-eth0" Feb 12 20:26:04.390691 env[1799]: 2024-02-12 20:26:04.353 [INFO][4362] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.103-k8s-test--pod--1-eth0" Feb 12 20:26:04.390691 env[1799]: 2024-02-12 20:26:04.370 [INFO][4362] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.103-k8s-test--pod--1-eth0" Feb 12 20:26:04.390691 env[1799]: 2024-02-12 20:26:04.372 [INFO][4362] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.103-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.103-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3d3b5553-97f4-4f06-baa3-aa60ef7b1901", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 25, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.103", 
ContainerID:"47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.78.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"5a:f4:80:74:c5:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:26:04.390691 env[1799]: 2024-02-12 20:26:04.385 [INFO][4362] k8s.go 491: Wrote updated endpoint to datastore ContainerID="47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.103-k8s-test--pod--1-eth0" Feb 12 20:26:04.429068 env[1799]: time="2024-02-12T20:26:04.420649504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:26:04.429068 env[1799]: time="2024-02-12T20:26:04.420731236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:26:04.429068 env[1799]: time="2024-02-12T20:26:04.420758884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:26:04.429068 env[1799]: time="2024-02-12T20:26:04.421041053Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b pid=4401 runtime=io.containerd.runc.v2 Feb 12 20:26:04.436000 audit[4403]: NETFILTER_CFG table=filter:101 family=2 entries=42 op=nft_register_chain pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:26:04.436000 audit[4403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20268 a0=3 a1=ffffeabbe510 a2=0 a3=ffffa5251fa8 items=0 ppid=3177 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:04.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:26:04.517725 env[1799]: time="2024-02-12T20:26:04.517672432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3d3b5553-97f4-4f06-baa3-aa60ef7b1901,Namespace:default,Attempt:0,} returns sandbox id \"47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b\"" Feb 12 20:26:04.521355 env[1799]: time="2024-02-12T20:26:04.521235618Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 20:26:04.734711 kubelet[2315]: E0212 20:26:04.734657 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:04.945505 env[1799]: time="2024-02-12T20:26:04.945429232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:04.948412 env[1799]: 
time="2024-02-12T20:26:04.948351387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:04.951533 env[1799]: time="2024-02-12T20:26:04.951469899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:04.956986 env[1799]: time="2024-02-12T20:26:04.956908297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:26:04.959906 env[1799]: time="2024-02-12T20:26:04.958363555Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 20:26:04.963184 env[1799]: time="2024-02-12T20:26:04.963113930Z" level=info msg="CreateContainer within sandbox \"47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 20:26:04.984811 env[1799]: time="2024-02-12T20:26:04.984673695Z" level=info msg="CreateContainer within sandbox \"47b6480b6ad90d7db1fff5f9ca794752467036e034eff6baace7f695a4af258b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"545f7fcd045e4022d3b5691ef6738851247e7aea120bc1cd79ffe214bca17a36\"" Feb 12 20:26:04.986571 env[1799]: time="2024-02-12T20:26:04.986514778Z" level=info msg="StartContainer for \"545f7fcd045e4022d3b5691ef6738851247e7aea120bc1cd79ffe214bca17a36\"" Feb 12 20:26:05.094606 env[1799]: time="2024-02-12T20:26:05.094535260Z" level=info msg="StartContainer for \"545f7fcd045e4022d3b5691ef6738851247e7aea120bc1cd79ffe214bca17a36\" returns successfully" Feb 12 20:26:05.376412 kubelet[2315]: 
I0212 20:26:05.375694 2315 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372001479137e+09 pod.CreationTimestamp="2024-02-12 20:25:30 +0000 UTC" firstStartedPulling="2024-02-12 20:26:04.520317218 +0000 UTC m=+104.829424147" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:26:05.343541043 +0000 UTC m=+105.652647996" watchObservedRunningTime="2024-02-12 20:26:05.375637843 +0000 UTC m=+105.684744808" Feb 12 20:26:05.467000 audit[4535]: NETFILTER_CFG table=filter:102 family=2 entries=7 op=nft_register_rule pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:26:05.467000 audit[4535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff5f4c8a0 a2=0 a3=ffff87c1d6c0 items=0 ppid=2591 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:05.467000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:26:05.476000 audit[4535]: NETFILTER_CFG table=nat:103 family=2 entries=205 op=nft_register_chain pid=4535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:26:05.476000 audit[4535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=70436 a0=3 a1=fffff5f4c8a0 a2=0 a3=ffff87c1d6c0 items=0 ppid=2591 pid=4535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:05.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:26:05.580000 audit[4561]: NETFILTER_CFG table=filter:104 family=2 entries=6 
op=nft_register_rule pid=4561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:26:05.580000 audit[4561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff92416a0 a2=0 a3=ffffb23306c0 items=0 ppid=2591 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:05.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:26:05.589000 audit[4561]: NETFILTER_CFG table=nat:105 family=2 entries=212 op=nft_register_chain pid=4561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:26:05.589000 audit[4561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=72324 a0=3 a1=fffff92416a0 a2=0 a3=ffffb23306c0 items=0 ppid=2591 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:05.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:26:05.735322 kubelet[2315]: E0212 20:26:05.735275 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:06.336287 systemd-networkd[1586]: cali5ec59c6bf6e: Gained IPv6LL Feb 12 20:26:06.736503 kubelet[2315]: E0212 20:26:06.736443 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:07.737086 kubelet[2315]: E0212 20:26:07.737038 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:08.739040 kubelet[2315]: E0212 20:26:08.738936 2315 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:09.740204 kubelet[2315]: E0212 20:26:09.740105 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:10.740616 kubelet[2315]: E0212 20:26:10.740547 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:11.741749 kubelet[2315]: E0212 20:26:11.741712 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:12.743412 kubelet[2315]: E0212 20:26:12.743342 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:13.744388 kubelet[2315]: E0212 20:26:13.744338 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:14.746041 kubelet[2315]: E0212 20:26:14.745946 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:15.746223 kubelet[2315]: E0212 20:26:15.746146 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:16.746355 kubelet[2315]: E0212 20:26:16.746283 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:17.747228 kubelet[2315]: E0212 20:26:17.747180 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:18.748078 kubelet[2315]: E0212 20:26:18.748007 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:19.748775 kubelet[2315]: E0212 20:26:19.748710 2315 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:20.644496 kubelet[2315]: E0212 20:26:20.644427 2315 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:20.748915 kubelet[2315]: E0212 20:26:20.748870 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:21.750479 kubelet[2315]: E0212 20:26:21.750431 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:22.751884 kubelet[2315]: E0212 20:26:22.751839 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:23.753085 kubelet[2315]: E0212 20:26:23.753037 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:24.754208 kubelet[2315]: E0212 20:26:24.754109 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:25.168215 kubelet[2315]: E0212 20:26:25.167820 2315 controller.go:189] failed to update lease, error: Put "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 20:26:25.754345 kubelet[2315]: E0212 20:26:25.754272 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:26.755361 kubelet[2315]: E0212 20:26:26.755315 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:27.756707 kubelet[2315]: E0212 20:26:27.756666 2315 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:28.757586 kubelet[2315]: E0212 20:26:28.757512 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:29.758289 kubelet[2315]: E0212 20:26:29.758220 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:30.758827 kubelet[2315]: E0212 20:26:30.758762 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:31.759013 kubelet[2315]: E0212 20:26:31.758903 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:32.760779 kubelet[2315]: E0212 20:26:32.760716 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:33.761788 kubelet[2315]: E0212 20:26:33.761743 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:34.763565 kubelet[2315]: E0212 20:26:34.763515 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:35.169270 kubelet[2315]: E0212 20:26:35.168865 2315 controller.go:189] failed to update lease, error: Put "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 20:26:35.765048 kubelet[2315]: E0212 20:26:35.765005 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:36.766614 kubelet[2315]: E0212 20:26:36.766568 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
12 20:26:37.768545 kubelet[2315]: E0212 20:26:37.768482 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:38.768692 kubelet[2315]: E0212 20:26:38.768613 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:39.769106 kubelet[2315]: E0212 20:26:39.769040 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:40.644416 kubelet[2315]: E0212 20:26:40.644351 2315 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:40.769551 kubelet[2315]: E0212 20:26:40.769479 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 20:26:41.461611 kubelet[2315]: E0212 20:26:41.461523 2315 controller.go:189] failed to update lease, error: Put "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": unexpected EOF Feb 12 20:26:41.474446 kubelet[2315]: E0212 20:26:41.474404 2315 controller.go:189] failed to update lease, error: Put "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": read tcp 172.31.16.103:52868->172.31.16.195:6443: read: connection reset by peer Feb 12 20:26:41.477016 kubelet[2315]: E0212 20:26:41.476387 2315 controller.go:189] failed to update lease, error: Put "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:26:41.477016 kubelet[2315]: I0212 20:26:41.476456 2315 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Feb 12 20:26:41.480586 kubelet[2315]: E0212 
20:26:41.480487 2315 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused
Feb 12 20:26:41.682279 kubelet[2315]: E0212 20:26:41.682209 2315 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused
Feb 12 20:26:41.770142 kubelet[2315]: E0212 20:26:41.770055 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:42.083620 kubelet[2315]: E0212 20:26:42.083469 2315 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused
Feb 12 20:26:42.770780 kubelet[2315]: E0212 20:26:42.770731 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:42.884912 kubelet[2315]: E0212 20:26:42.884846 2315 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused
Feb 12 20:26:43.772474 kubelet[2315]: E0212 20:26:43.772428 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:44.487308 kubelet[2315]: E0212 20:26:44.487221 2315 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused
Feb 12 20:26:44.774848 kubelet[2315]: E0212 20:26:44.774411 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:45.420055 kubelet[2315]: E0212 20:26:45.420017 2315 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.103\": Get \"https://172.31.16.195:6443/api/v1/nodes/172.31.16.103?resourceVersion=0&timeout=10s\": dial tcp 172.31.16.195:6443: connect: connection refused"
Feb 12 20:26:45.420767 kubelet[2315]: E0212 20:26:45.420716 2315 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.103\": Get \"https://172.31.16.195:6443/api/v1/nodes/172.31.16.103?timeout=10s\": dial tcp 172.31.16.195:6443: connect: connection refused"
Feb 12 20:26:45.421162 kubelet[2315]: E0212 20:26:45.421119 2315 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.103\": Get \"https://172.31.16.195:6443/api/v1/nodes/172.31.16.103?timeout=10s\": dial tcp 172.31.16.195:6443: connect: connection refused"
Feb 12 20:26:45.421548 kubelet[2315]: E0212 20:26:45.421502 2315 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.103\": Get \"https://172.31.16.195:6443/api/v1/nodes/172.31.16.103?timeout=10s\": dial tcp 172.31.16.195:6443: connect: connection refused"
Feb 12 20:26:45.421951 kubelet[2315]: E0212 20:26:45.421905 2315 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.103\": Get \"https://172.31.16.195:6443/api/v1/nodes/172.31.16.103?timeout=10s\": dial tcp 172.31.16.195:6443: connect: connection refused"
Feb 12 20:26:45.422107 kubelet[2315]: E0212 20:26:45.421947 2315 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Feb 12 20:26:45.775637 kubelet[2315]: E0212 20:26:45.775566 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:46.776378 kubelet[2315]: E0212 20:26:46.776257 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:47.776990 kubelet[2315]: E0212 20:26:47.776918 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:48.778008 kubelet[2315]: E0212 20:26:48.777928 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:49.778320 kubelet[2315]: E0212 20:26:49.778280 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:50.780093 kubelet[2315]: E0212 20:26:50.780018 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:51.780294 kubelet[2315]: E0212 20:26:51.780226 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:52.781052 kubelet[2315]: E0212 20:26:52.780996 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:53.781177 kubelet[2315]: E0212 20:26:53.781136 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:54.782716 kubelet[2315]: E0212 20:26:54.782671 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:55.783788 kubelet[2315]: E0212 20:26:55.783743 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:56.785028 kubelet[2315]: E0212 20:26:56.784911 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:57.689557 kubelet[2315]: E0212 20:26:57.689467 2315 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.103?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 12 20:26:57.785368 kubelet[2315]: E0212 20:26:57.785315 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:58.793762 kubelet[2315]: E0212 20:26:58.786471 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:26:59.786724 kubelet[2315]: E0212 20:26:59.786678 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:00.643781 kubelet[2315]: E0212 20:27:00.643743 2315 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:00.787604 kubelet[2315]: E0212 20:27:00.787506 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:01.787755 kubelet[2315]: E0212 20:27:01.787714 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:02.788868 kubelet[2315]: E0212 20:27:02.788791 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:03.790372 kubelet[2315]: E0212 20:27:03.790302 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:04.791415 kubelet[2315]: E0212 20:27:04.791374 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:05.706876 kubelet[2315]: E0212 20:27:05.706813 2315 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.103\": Get \"https://172.31.16.195:6443/api/v1/nodes/172.31.16.103?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 12 20:27:05.792489 kubelet[2315]: E0212 20:27:05.792419 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 20:27:06.793503 kubelet[2315]: E0212 20:27:06.793208 2315 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"