Feb 12 19:23:30.825141 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:23:30.825162 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:23:30.825170 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:23:30.825176 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Feb 12 19:23:30.825181 kernel: random: crng init done
Feb 12 19:23:30.825186 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:23:30.825193 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Feb 12 19:23:30.825200 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 12 19:23:30.825205 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825211 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825216 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825222 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825227 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825233 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825241 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825247 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825253 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:23:30.825258 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 12 19:23:30.825264 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:23:30.825270 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:23:30.825276 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Feb 12 19:23:30.825281 kernel: Zone ranges:
Feb 12 19:23:30.825287 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:23:30.825328 kernel: DMA32 empty
Feb 12 19:23:30.825333 kernel: Normal empty
Feb 12 19:23:30.825340 kernel: Movable zone start for each node
Feb 12 19:23:30.825345 kernel: Early memory node ranges
Feb 12 19:23:30.825351 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Feb 12 19:23:30.825357 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Feb 12 19:23:30.825363 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Feb 12 19:23:30.825369 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Feb 12 19:23:30.825375 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Feb 12 19:23:30.825380 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Feb 12 19:23:30.825387 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Feb 12 19:23:30.825392 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 12 19:23:30.825400 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 12 19:23:30.825405 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:23:30.825411 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:23:30.825417 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:23:30.825423 kernel: psci: Trusted OS migration not required
Feb 12 19:23:30.825432 kernel: psci: SMC Calling Convention v1.1
Feb 12 19:23:30.825438 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 12 19:23:30.825446 kernel: ACPI: SRAT not present
Feb 12 19:23:30.825452 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:23:30.825458 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:23:30.825465 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 12 19:23:30.825471 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:23:30.825477 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:23:30.825483 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:23:30.825489 kernel: CPU features: detected: Spectre-v4
Feb 12 19:23:30.825496 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:23:30.825503 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:23:30.825509 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:23:30.825515 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:23:30.825521 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 12 19:23:30.825527 kernel: Policy zone: DMA
Feb 12 19:23:30.825534 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:23:30.825541 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:23:30.825547 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:23:30.825554 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:23:30.825560 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:23:30.825566 kernel: Memory: 2459152K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113136K reserved, 0K cma-reserved)
Feb 12 19:23:30.825574 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 19:23:30.825580 kernel: trace event string verifier disabled
Feb 12 19:23:30.825586 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:23:30.825593 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:23:30.825599 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 19:23:30.825605 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:23:30.825611 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:23:30.825617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:23:30.825623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 19:23:30.825629 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:23:30.825635 kernel: GICv3: 256 SPIs implemented
Feb 12 19:23:30.825642 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:23:30.825649 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:23:30.825655 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:23:30.825661 kernel: GICv3: 16 PPIs implemented
Feb 12 19:23:30.825667 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 12 19:23:30.825673 kernel: ACPI: SRAT not present
Feb 12 19:23:30.825678 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 12 19:23:30.825685 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 19:23:30.825691 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 19:23:30.825697 kernel: GICv3: using LPI property table @0x00000000400d0000
Feb 12 19:23:30.825703 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Feb 12 19:23:30.825710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:23:30.825718 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:23:30.825724 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:23:30.825730 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:23:30.825736 kernel: arm-pv: using stolen time PV
Feb 12 19:23:30.825743 kernel: Console: colour dummy device 80x25
Feb 12 19:23:30.825749 kernel: ACPI: Core revision 20210730
Feb 12 19:23:30.825755 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:23:30.825762 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:23:30.825768 kernel: LSM: Security Framework initializing
Feb 12 19:23:30.825774 kernel: SELinux: Initializing.
Feb 12 19:23:30.825796 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:23:30.825802 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:23:30.825815 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:23:30.825822 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 12 19:23:30.825828 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 12 19:23:30.825834 kernel: Remapping and enabling EFI services.
Feb 12 19:23:30.825840 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:23:30.825847 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:23:30.825853 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 12 19:23:30.825861 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Feb 12 19:23:30.825867 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:23:30.825873 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:23:30.825879 kernel: Detected PIPT I-cache on CPU2
Feb 12 19:23:30.825886 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 12 19:23:30.825892 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Feb 12 19:23:30.825898 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:23:30.825905 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 12 19:23:30.825911 kernel: Detected PIPT I-cache on CPU3
Feb 12 19:23:30.825917 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 12 19:23:30.825925 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Feb 12 19:23:30.825931 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:23:30.825937 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 12 19:23:30.825944 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 19:23:30.825954 kernel: SMP: Total of 4 processors activated.
Feb 12 19:23:30.825962 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:23:30.825969 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:23:30.825976 kernel: CPU features: detected: Common not Private translations
Feb 12 19:23:30.825982 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:23:30.825989 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:23:30.825995 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:23:30.826002 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:23:30.826014 kernel: CPU features: detected: RAS Extension Support
Feb 12 19:23:30.826021 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 12 19:23:30.826027 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:23:30.826036 kernel: alternatives: patching kernel code
Feb 12 19:23:30.826044 kernel: devtmpfs: initialized
Feb 12 19:23:30.826050 kernel: KASLR enabled
Feb 12 19:23:30.826057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:23:30.826064 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 19:23:30.826071 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:23:30.826078 kernel: SMBIOS 3.0.0 present.
Feb 12 19:23:30.826084 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 12 19:23:30.826091 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:23:30.826097 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:23:30.826104 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:23:30.826112 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:23:30.826119 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:23:30.826126 kernel: audit: type=2000 audit(0.129:1): state=initialized audit_enabled=0 res=1
Feb 12 19:23:30.826132 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:23:30.826139 kernel: cpuidle: using governor menu
Feb 12 19:23:30.826146 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:23:30.826152 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:23:30.826159 kernel: ACPI: bus type PCI registered
Feb 12 19:23:30.826165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:23:30.826173 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:23:30.826180 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:23:30.826187 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:23:30.826193 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:23:30.826200 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:23:30.826206 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:23:30.826213 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:23:30.826219 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:23:30.826226 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:23:30.826234 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:23:30.826241 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:23:30.826248 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:23:30.826254 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:23:30.826261 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:23:30.826267 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:23:30.826274 kernel: ACPI: Interpreter enabled
Feb 12 19:23:30.826280 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:23:30.826286 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 19:23:30.826300 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:23:30.826306 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:23:30.826313 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:23:30.826450 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:23:30.826517 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 19:23:30.826578 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 19:23:30.826638 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 12 19:23:30.826700 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 12 19:23:30.826709 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 12 19:23:30.826716 kernel: PCI host bridge to bus 0000:00
Feb 12 19:23:30.826784 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 12 19:23:30.826852 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 12 19:23:30.826906 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 12 19:23:30.826957 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:23:30.827069 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 12 19:23:30.827141 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 19:23:30.827203 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 12 19:23:30.827266 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 12 19:23:30.827364 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:23:30.827426 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 12 19:23:30.827487 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 12 19:23:30.827603 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 12 19:23:30.827665 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 12 19:23:30.827720 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 12 19:23:30.827773 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 12 19:23:30.827786 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 19:23:30.827793 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 19:23:30.827800 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 19:23:30.827815 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 19:23:30.827822 kernel: iommu: Default domain type: Translated
Feb 12 19:23:30.827829 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:23:30.827836 kernel: vgaarb: loaded
Feb 12 19:23:30.827842 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:23:30.827849 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:23:30.827856 kernel: PTP clock support registered
Feb 12 19:23:30.827862 kernel: Registered efivars operations
Feb 12 19:23:30.827869 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:23:30.827877 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:23:30.827884 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:23:30.827891 kernel: pnp: PnP ACPI init
Feb 12 19:23:30.827967 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 12 19:23:30.827978 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 19:23:30.827984 kernel: NET: Registered PF_INET protocol family
Feb 12 19:23:30.827991 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:23:30.827998 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:23:30.828005 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:23:30.828013 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:23:30.828020 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:23:30.828027 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:23:30.828033 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:23:30.828040 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:23:30.828047 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:23:30.828053 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:23:30.828060 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 12 19:23:30.828068 kernel: kvm [1]: HYP mode not available
Feb 12 19:23:30.828075 kernel: Initialise system trusted keyrings
Feb 12 19:23:30.828082 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:23:30.828088 kernel: Key type asymmetric registered
Feb 12 19:23:30.828095 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:23:30.828102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:23:30.828109 kernel: io scheduler mq-deadline registered
Feb 12 19:23:30.828115 kernel: io scheduler kyber registered
Feb 12 19:23:30.828122 kernel: io scheduler bfq registered
Feb 12 19:23:30.828129 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 19:23:30.828137 kernel: ACPI: button: Power Button [PWRB]
Feb 12 19:23:30.828144 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 19:23:30.828206 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 12 19:23:30.828215 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:23:30.828221 kernel: thunder_xcv, ver 1.0
Feb 12 19:23:30.828228 kernel: thunder_bgx, ver 1.0
Feb 12 19:23:30.828235 kernel: nicpf, ver 1.0
Feb 12 19:23:30.828241 kernel: nicvf, ver 1.0
Feb 12 19:23:30.828334 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:23:30.828397 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:23:30 UTC (1707765810)
Feb 12 19:23:30.828410 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:23:30.828417 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:23:30.828424 kernel: Segment Routing with IPv6
Feb 12 19:23:30.828431 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:23:30.828437 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:23:30.828444 kernel: Key type dns_resolver registered
Feb 12 19:23:30.828450 kernel: registered taskstats version 1
Feb 12 19:23:30.828459 kernel: Loading compiled-in X.509 certificates
Feb 12 19:23:30.828466 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:23:30.828472 kernel: Key type .fscrypt registered
Feb 12 19:23:30.828479 kernel: Key type fscrypt-provisioning registered
Feb 12 19:23:30.828485 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:23:30.828492 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:23:30.828499 kernel: ima: No architecture policies found
Feb 12 19:23:30.828505 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:23:30.828514 kernel: Run /init as init process
Feb 12 19:23:30.828520 kernel: with arguments:
Feb 12 19:23:30.828527 kernel: /init
Feb 12 19:23:30.828533 kernel: with environment:
Feb 12 19:23:30.828540 kernel: HOME=/
Feb 12 19:23:30.828546 kernel: TERM=linux
Feb 12 19:23:30.828553 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:23:30.828561 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:23:30.828570 systemd[1]: Detected virtualization kvm.
Feb 12 19:23:30.828579 systemd[1]: Detected architecture arm64.
Feb 12 19:23:30.828586 systemd[1]: Running in initrd.
Feb 12 19:23:30.828593 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:23:30.828600 systemd[1]: Hostname set to .
Feb 12 19:23:30.828608 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:23:30.828615 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:23:30.828622 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:23:30.828629 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:23:30.828637 systemd[1]: Reached target paths.target.
Feb 12 19:23:30.828644 systemd[1]: Reached target slices.target.
Feb 12 19:23:30.828651 systemd[1]: Reached target swap.target.
Feb 12 19:23:30.828658 systemd[1]: Reached target timers.target.
Feb 12 19:23:30.828666 systemd[1]: Listening on iscsid.socket.
Feb 12 19:23:30.828673 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:23:30.828680 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:23:30.828689 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:23:30.828696 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:23:30.828703 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:23:30.828710 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:23:30.828717 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:23:30.828724 systemd[1]: Reached target sockets.target.
Feb 12 19:23:30.828731 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:23:30.828738 systemd[1]: Finished network-cleanup.service.
Feb 12 19:23:30.828746 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:23:30.828754 systemd[1]: Starting systemd-journald.service...
Feb 12 19:23:30.828761 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:23:30.828768 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:23:30.828776 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:23:30.828783 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:23:30.828790 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:23:30.828797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:23:30.828810 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:23:30.828822 systemd-journald[290]: Journal started
Feb 12 19:23:30.828866 systemd-journald[290]: Runtime Journal (/run/log/journal/98e53e4e6afb48d49022bbe541ceb23c) is 6.0M, max 48.7M, 42.6M free.
Feb 12 19:23:30.820697 systemd-modules-load[291]: Inserted module 'overlay'
Feb 12 19:23:30.832448 systemd[1]: Started systemd-journald.service.
Feb 12 19:23:30.832477 kernel: audit: type=1130 audit(1707765810.828:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.834175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:23:30.836573 kernel: audit: type=1130 audit(1707765810.833:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.840307 kernel: audit: type=1130 audit(1707765810.837:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.840386 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:23:30.848094 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:23:30.848770 systemd-modules-load[291]: Inserted module 'br_netfilter'
Feb 12 19:23:30.849865 kernel: Bridge firewalling registered
Feb 12 19:23:30.851777 systemd-resolved[292]: Positive Trust Anchors:
Feb 12 19:23:30.851794 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:23:30.851829 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:23:30.857435 systemd-resolved[292]: Defaulting to hostname 'linux'.
Feb 12 19:23:30.858446 systemd[1]: Started systemd-resolved.service.
Feb 12 19:23:30.863507 kernel: SCSI subsystem initialized
Feb 12 19:23:30.863531 kernel: audit: type=1130 audit(1707765810.859:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.860298 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:23:30.870327 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:23:30.870382 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:23:30.870393 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:23:30.873053 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:23:30.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.875015 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:23:30.878008 kernel: audit: type=1130 audit(1707765810.873:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.875225 systemd-modules-load[291]: Inserted module 'dm_multipath'
Feb 12 19:23:30.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.877691 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:23:30.883398 kernel: audit: type=1130 audit(1707765810.878:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.879699 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:23:30.888138 dracut-cmdline[309]: dracut-dracut-053
Feb 12 19:23:30.890272 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:23:30.891905 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:23:30.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.898377 kernel: audit: type=1130 audit(1707765810.893:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:30.971334 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:23:30.982323 kernel: iscsi: registered transport (tcp)
Feb 12 19:23:31.000310 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:23:31.000327 kernel: QLogic iSCSI HBA Driver
Feb 12 19:23:31.066344 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:23:31.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:31.068285 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:23:31.071093 kernel: audit: type=1130 audit(1707765811.066:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:31.121317 kernel: raid6: neonx8 gen() 13615 MB/s
Feb 12 19:23:31.138308 kernel: raid6: neonx8 xor() 10690 MB/s
Feb 12 19:23:31.155305 kernel: raid6: neonx4 gen() 13466 MB/s
Feb 12 19:23:31.172307 kernel: raid6: neonx4 xor() 11143 MB/s
Feb 12 19:23:31.189302 kernel: raid6: neonx2 gen() 12903 MB/s
Feb 12 19:23:31.206305 kernel: raid6: neonx2 xor() 10238 MB/s
Feb 12 19:23:31.223306 kernel: raid6: neonx1 gen() 10453 MB/s
Feb 12 19:23:31.240313 kernel: raid6: neonx1 xor() 8693 MB/s
Feb 12 19:23:31.257310 kernel: raid6: int64x8 gen() 6269 MB/s
Feb 12 19:23:31.274311 kernel: raid6: int64x8 xor() 3529 MB/s
Feb 12 19:23:31.291309 kernel: raid6: int64x4 gen() 7218 MB/s
Feb 12 19:23:31.308310 kernel: raid6: int64x4 xor() 3837 MB/s
Feb 12 19:23:31.325309 kernel: raid6: int64x2 gen() 6136 MB/s
Feb 12 19:23:31.342314 kernel: raid6: int64x2 xor() 3277 MB/s
Feb 12 19:23:31.359313 kernel: raid6: int64x1 gen() 4984 MB/s
Feb 12 19:23:31.376634 kernel: raid6: int64x1 xor() 2622 MB/s
Feb 12 19:23:31.376654 kernel: raid6: using algorithm neonx8 gen() 13615 MB/s
Feb 12 19:23:31.376664 kernel: raid6: .... xor() 10690 MB/s, rmw enabled
Feb 12 19:23:31.376672 kernel: raid6: using neon recovery algorithm
Feb 12 19:23:31.389555 kernel: xor: measuring software checksum speed
Feb 12 19:23:31.389590 kernel: 8regs : 17293 MB/sec
Feb 12 19:23:31.390441 kernel: 32regs : 20749 MB/sec
Feb 12 19:23:31.391644 kernel: arm64_neon : 27939 MB/sec
Feb 12 19:23:31.391656 kernel: xor: using function: arm64_neon (27939 MB/sec)
Feb 12 19:23:31.473324 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:23:31.489157 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:23:31.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:31.492331 kernel: audit: type=1130 audit(1707765811.489:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:31.491000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:23:31.493000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:23:31.495066 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:23:31.509052 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Feb 12 19:23:31.512569 systemd[1]: Started systemd-udevd.service.
Feb 12 19:23:31.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:31.514538 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:23:31.534198 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Feb 12 19:23:31.562211 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:23:31.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:31.563923 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:23:31.600372 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:23:31.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:31.633890 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 19:23:31.638310 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 19:23:31.638354 kernel: GPT:9289727 != 19775487
Feb 12 19:23:31.638364 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 19:23:31.638373 kernel: GPT:9289727 != 19775487
Feb 12 19:23:31.638381 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 19:23:31.638389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:23:31.660313 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (541)
Feb 12 19:23:31.665241 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:23:31.668541 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:23:31.669771 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:23:31.674698 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:23:31.681084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:23:31.683099 systemd[1]: Starting disk-uuid.service...
Feb 12 19:23:31.690258 disk-uuid[560]: Primary Header is updated.
Feb 12 19:23:31.690258 disk-uuid[560]: Secondary Entries is updated.
Feb 12 19:23:31.690258 disk-uuid[560]: Secondary Header is updated.
Feb 12 19:23:31.693663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:23:32.718309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 19:23:32.718443 disk-uuid[561]: The operation has completed successfully.
Feb 12 19:23:32.751408 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:23:32.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:32.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:32.751513 systemd[1]: Finished disk-uuid.service.
Feb 12 19:23:32.756005 systemd[1]: Starting verity-setup.service...
Feb 12 19:23:32.775328 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 19:23:32.807969 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:23:32.810220 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:23:32.812317 systemd[1]: Finished verity-setup.service.
Feb 12 19:23:32.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:32.865316 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:23:32.865808 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:23:32.866716 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:23:32.867503 systemd[1]: Starting ignition-setup.service...
Feb 12 19:23:32.869858 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:23:32.881728 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:23:32.881783 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:23:32.881793 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:23:32.898015 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:23:32.908581 systemd[1]: Finished ignition-setup.service.
Feb 12 19:23:32.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:32.910412 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:23:32.966818 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:23:32.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:32.968000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:23:32.969961 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:23:32.999718 systemd-networkd[731]: lo: Link UP
Feb 12 19:23:32.999730 systemd-networkd[731]: lo: Gained carrier
Feb 12 19:23:33.000123 systemd-networkd[731]: Enumeration completed
Feb 12 19:23:33.000276 systemd[1]: Started systemd-networkd.service.
Feb 12 19:23:33.000317 systemd-networkd[731]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:23:33.001301 systemd[1]: Reached target network.target.
Feb 12 19:23:33.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.002149 systemd-networkd[731]: eth0: Link UP
Feb 12 19:23:33.002152 systemd-networkd[731]: eth0: Gained carrier
Feb 12 19:23:33.003402 systemd[1]: Starting iscsiuio.service...
Feb 12 19:23:33.020547 systemd[1]: Started iscsiuio.service.
Feb 12 19:23:33.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.022530 systemd[1]: Starting iscsid.service...
Feb 12 19:23:33.028493 iscsid[736]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:23:33.028493 iscsid[736]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 19:23:33.028493 iscsid[736]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:23:33.028493 iscsid[736]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:23:33.028493 iscsid[736]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:23:33.028493 iscsid[736]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:23:33.036449 systemd-networkd[731]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:23:33.037757 systemd[1]: Started iscsid.service.
Feb 12 19:23:33.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.039824 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:23:33.053516 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:23:33.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.054523 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:23:33.055875 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:23:33.057223 systemd[1]: Reached target remote-fs.target.
Feb 12 19:23:33.059426 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:23:33.069762 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:23:33.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.100618 ignition[671]: Ignition 2.14.0
Feb 12 19:23:33.100633 ignition[671]: Stage: fetch-offline
Feb 12 19:23:33.100691 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:23:33.100701 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:23:33.100867 ignition[671]: parsed url from cmdline: ""
Feb 12 19:23:33.100871 ignition[671]: no config URL provided
Feb 12 19:23:33.100876 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:23:33.100884 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:23:33.100905 ignition[671]: op(1): [started] loading QEMU firmware config module
Feb 12 19:23:33.100910 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 19:23:33.110926 ignition[671]: op(1): [finished] loading QEMU firmware config module
Feb 12 19:23:33.131966 ignition[671]: parsing config with SHA512: e983ae32d03dc1d53c9a396d9ef8bc2edf55ca82293ae299b96d945e81b56e8fc2d76e915128509f8c52e849426b6bb7b9ea72a2a7e6c7beb9ed17a64058b97c
Feb 12 19:23:33.157109 unknown[671]: fetched base config from "system"
Feb 12 19:23:33.157122 unknown[671]: fetched user config from "qemu"
Feb 12 19:23:33.157669 ignition[671]: fetch-offline: fetch-offline passed
Feb 12 19:23:33.158677 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:23:33.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.157734 ignition[671]: Ignition finished successfully
Feb 12 19:23:33.159779 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 19:23:33.160614 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:23:33.170210 ignition[757]: Ignition 2.14.0
Feb 12 19:23:33.170220 ignition[757]: Stage: kargs
Feb 12 19:23:33.170346 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:23:33.170358 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:23:33.173026 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:23:33.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.171445 ignition[757]: kargs: kargs passed
Feb 12 19:23:33.171496 ignition[757]: Ignition finished successfully
Feb 12 19:23:33.175221 systemd[1]: Starting ignition-disks.service...
Feb 12 19:23:33.183066 ignition[763]: Ignition 2.14.0
Feb 12 19:23:33.183075 ignition[763]: Stage: disks
Feb 12 19:23:33.183191 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Feb 12 19:23:33.183202 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:23:33.184560 ignition[763]: disks: disks passed
Feb 12 19:23:33.184613 ignition[763]: Ignition finished successfully
Feb 12 19:23:33.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.186461 systemd[1]: Finished ignition-disks.service.
Feb 12 19:23:33.187438 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:23:33.188420 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:23:33.189409 systemd[1]: Reached target local-fs.target.
Feb 12 19:23:33.190600 systemd[1]: Reached target sysinit.target.
Feb 12 19:23:33.191615 systemd[1]: Reached target basic.target.
Feb 12 19:23:33.193571 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:23:33.206056 systemd-fsck[771]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 19:23:33.209968 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:23:33.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.211637 systemd[1]: Mounting sysroot.mount...
Feb 12 19:23:33.223320 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:23:33.224041 systemd[1]: Mounted sysroot.mount.
Feb 12 19:23:33.224902 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:23:33.227614 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:23:33.228678 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 19:23:33.228729 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:23:33.228757 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:23:33.230702 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:23:33.233452 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:23:33.238001 initrd-setup-root[781]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:23:33.243011 initrd-setup-root[789]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:23:33.246606 initrd-setup-root[797]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:23:33.251301 initrd-setup-root[805]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:23:33.280980 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:23:33.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.283041 systemd[1]: Starting ignition-mount.service...
Feb 12 19:23:33.285037 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:23:33.291636 bash[822]: umount: /sysroot/usr/share/oem: not mounted.
Feb 12 19:23:33.301945 ignition[824]: INFO : Ignition 2.14.0
Feb 12 19:23:33.301945 ignition[824]: INFO : Stage: mount
Feb 12 19:23:33.304413 ignition[824]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:23:33.304413 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:23:33.304413 ignition[824]: INFO : mount: mount passed
Feb 12 19:23:33.304413 ignition[824]: INFO : Ignition finished successfully
Feb 12 19:23:33.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.304611 systemd[1]: Finished ignition-mount.service.
Feb 12 19:23:33.308500 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:23:33.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:33.824411 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:23:33.832320 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (833)
Feb 12 19:23:33.835579 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 19:23:33.835597 kernel: BTRFS info (device vda6): using free space tree
Feb 12 19:23:33.835607 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 19:23:33.841269 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:23:33.842925 systemd[1]: Starting ignition-files.service...
Feb 12 19:23:33.857654 ignition[853]: INFO : Ignition 2.14.0
Feb 12 19:23:33.857654 ignition[853]: INFO : Stage: files
Feb 12 19:23:33.858908 ignition[853]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 12 19:23:33.858908 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 19:23:33.858908 ignition[853]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:23:33.861512 ignition[853]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:23:33.861512 ignition[853]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:23:33.864266 ignition[853]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:23:33.865188 ignition[853]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:23:33.866356 unknown[853]: wrote ssh authorized keys file for user: core
Feb 12 19:23:33.867196 ignition[853]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:23:33.867196 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:23:33.867196 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 19:23:33.867196 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 19:23:33.867196 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 12 19:23:34.144067 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:23:34.334446 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 12 19:23:34.334446 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 19:23:34.334446 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 19:23:34.334446 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 12 19:23:34.509553 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:23:34.629745 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 12 19:23:34.631985 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 19:23:34.631985 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:23:34.631985 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 12 19:23:34.680050 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 12 19:23:34.927411 systemd-networkd[731]: eth0: Gained IPv6LL
Feb 12 19:23:34.945531 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 12 19:23:34.947845 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:23:34.947845 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:23:34.947845 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 12 19:23:34.968926 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:23:35.682245 ignition[853]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 12 19:23:35.684516 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(b): [started] processing unit "containerd.service"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(b): op(c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(b): [finished] processing unit "containerd.service"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:23:35.684516 ignition[853]: INFO : files: op(f): [started] processing unit "prepare-critools.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(f): [finished] processing unit "prepare-critools.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(13): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(15): [started] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:23:35.711314 ignition[853]: INFO : files: op(15): op(16): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:23:35.765578 ignition[853]: INFO : files: op(15): op(16): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 12 19:23:35.766778 ignition[853]: INFO : files: op(15): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 12 19:23:35.766778 ignition[853]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:23:35.766778 ignition[853]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:23:35.766778 ignition[853]: INFO : files: files passed
Feb 12 19:23:35.766778 ignition[853]: INFO : Ignition finished successfully
Feb 12 19:23:35.777419 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 19:23:35.777441 kernel: audit: type=1130 audit(1707765815.768:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.767982 systemd[1]: Finished ignition-files.service.
Feb 12 19:23:35.782049 kernel: audit: type=1130 audit(1707765815.777:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.782069 kernel: audit: type=1131 audit(1707765815.777:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.770127 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:23:35.785391 kernel: audit: type=1130 audit(1707765815.782:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.771586 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:23:35.787577 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 12 19:23:35.772276 systemd[1]: Starting ignition-quench.service...
Feb 12 19:23:35.790573 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:23:35.776157 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:23:35.776758 systemd[1]: Finished ignition-quench.service.
Feb 12 19:23:35.781843 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:23:35.783065 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:23:35.786956 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:23:35.808026 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:23:35.808149 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:23:35.815120 kernel: audit: type=1130 audit(1707765815.810:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.815145 kernel: audit: type=1131 audit(1707765815.810:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.811433 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:23:35.815688 systemd[1]: Reached target initrd.target.
Feb 12 19:23:35.816638 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:23:35.817494 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:23:35.828975 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:23:35.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.830637 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:23:35.833188 kernel: audit: type=1130 audit(1707765815.829:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:35.840510 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:23:35.841416 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:23:35.842705 systemd[1]: Stopped target timers.target. Feb 12 19:23:35.843854 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:23:35.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.844059 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:23:35.848325 kernel: audit: type=1131 audit(1707765815.844:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.845124 systemd[1]: Stopped target initrd.target. Feb 12 19:23:35.847959 systemd[1]: Stopped target basic.target. Feb 12 19:23:35.849067 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:23:35.850223 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:23:35.851401 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:23:35.852673 systemd[1]: Stopped target remote-fs.target. Feb 12 19:23:35.853843 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:23:35.855068 systemd[1]: Stopped target sysinit.target. Feb 12 19:23:35.856137 systemd[1]: Stopped target local-fs.target. Feb 12 19:23:35.857260 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:23:35.858400 systemd[1]: Stopped target swap.target. Feb 12 19:23:35.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.859447 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:23:35.863858 kernel: audit: type=1131 audit(1707765815.860:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:23:35.859572 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:23:35.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.860708 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:23:35.867826 kernel: audit: type=1131 audit(1707765815.864:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.863351 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:23:35.863469 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:23:35.864704 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:23:35.864817 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:23:35.867549 systemd[1]: Stopped target paths.target. Feb 12 19:23:35.868538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:23:35.873321 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:23:35.874261 systemd[1]: Stopped target slices.target. Feb 12 19:23:35.875486 systemd[1]: Stopped target sockets.target. Feb 12 19:23:35.876603 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:23:35.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.876734 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Feb 12 19:23:35.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.877995 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:23:35.878096 systemd[1]: Stopped ignition-files.service. Feb 12 19:23:35.881538 iscsid[736]: iscsid shutting down. Feb 12 19:23:35.880161 systemd[1]: Stopping ignition-mount.service... Feb 12 19:23:35.883425 systemd[1]: Stopping iscsid.service... Feb 12 19:23:35.883943 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:23:35.884068 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:23:35.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.889234 ignition[894]: INFO : Ignition 2.14.0 Feb 12 19:23:35.889234 ignition[894]: INFO : Stage: umount Feb 12 19:23:35.889234 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 19:23:35.889234 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 19:23:35.889234 ignition[894]: INFO : umount: umount passed Feb 12 19:23:35.889234 ignition[894]: INFO : Ignition finished successfully Feb 12 19:23:35.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:23:35.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.885561 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:23:35.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.886728 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:23:35.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.886887 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:23:35.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.887627 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:23:35.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.887727 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:23:35.890487 systemd[1]: iscsid.service: Deactivated successfully. 
Feb 12 19:23:35.890595 systemd[1]: Stopped iscsid.service. Feb 12 19:23:35.891992 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:23:35.892077 systemd[1]: Stopped ignition-mount.service. Feb 12 19:23:35.893847 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:23:35.893920 systemd[1]: Closed iscsid.socket. Feb 12 19:23:35.895237 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:23:35.895397 systemd[1]: Stopped ignition-disks.service. Feb 12 19:23:35.896944 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:23:35.896994 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:23:35.898276 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:23:35.898402 systemd[1]: Stopped ignition-setup.service. Feb 12 19:23:35.899678 systemd[1]: Stopping iscsiuio.service... Feb 12 19:23:35.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.902807 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:23:35.903279 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:23:35.903389 systemd[1]: Stopped iscsiuio.service. Feb 12 19:23:35.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.904328 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:23:35.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.904418 systemd[1]: Finished initrd-cleanup.service. 
Feb 12 19:23:35.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.906265 systemd[1]: Stopped target network.target. Feb 12 19:23:35.907457 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:23:35.907493 systemd[1]: Closed iscsiuio.socket. Feb 12 19:23:35.908858 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:23:35.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.910054 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:23:35.915832 systemd-networkd[731]: eth0: DHCPv6 lease lost Feb 12 19:23:35.936000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:23:35.916871 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:23:35.916978 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:23:35.938000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:23:35.918615 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:23:35.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.918646 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:23:35.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.920666 systemd[1]: Stopping network-cleanup.service... Feb 12 19:23:35.921587 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:23:35.921673 systemd[1]: Stopped parse-ip-for-networkd.service. 
Feb 12 19:23:35.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.923752 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:23:35.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.923821 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:23:35.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.926911 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:23:35.926969 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:23:35.927966 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:23:35.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.933000 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:23:35.933658 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:23:35.933762 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:23:35.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:35.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:23:35.939879 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:23:35.940049 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:23:35.941368 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:23:35.941473 systemd[1]: Stopped network-cleanup.service. Feb 12 19:23:35.942541 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:23:35.942577 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:23:35.944438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:23:35.944495 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:23:35.945898 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:23:35.945958 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:23:35.948137 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:23:35.948186 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:23:35.951146 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:23:35.951194 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:23:35.954465 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:23:35.956409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:23:35.956486 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:23:35.960761 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:23:35.960884 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:23:36.004053 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:23:36.004159 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:23:36.006356 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:23:36.007506 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Feb 12 19:23:36.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:36.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:36.007561 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:23:36.009730 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:23:36.016969 systemd[1]: Switching root. Feb 12 19:23:36.017000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:23:36.017000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:23:36.017000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:23:36.020000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:23:36.020000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:23:36.035941 systemd-journald[290]: Journal stopped Feb 12 19:23:38.340281 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Feb 12 19:23:38.340354 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:23:38.340370 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 12 19:23:38.340380 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:23:38.340390 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:23:38.340402 kernel: SELinux: policy capability open_perms=1 Feb 12 19:23:38.340412 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:23:38.340424 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:23:38.340433 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:23:38.340442 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:23:38.340452 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:23:38.340461 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:23:38.340471 systemd[1]: Successfully loaded SELinux policy in 35.780ms. Feb 12 19:23:38.340486 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.949ms. Feb 12 19:23:38.340498 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:23:38.340510 systemd[1]: Detected virtualization kvm. Feb 12 19:23:38.340520 systemd[1]: Detected architecture arm64. Feb 12 19:23:38.340530 systemd[1]: Detected first boot. Feb 12 19:23:38.340543 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:23:38.340553 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:23:38.340563 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:23:38.340573 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 19:23:38.340585 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:23:38.340598 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:23:38.340609 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:23:38.340620 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:23:38.340634 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:23:38.340646 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:23:38.340660 systemd[1]: Created slice system-getty.slice. Feb 12 19:23:38.340672 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:23:38.340683 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:23:38.340694 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:23:38.340705 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:23:38.340715 systemd[1]: Created slice user.slice. Feb 12 19:23:38.340725 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:23:38.340735 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:23:38.340745 systemd[1]: Set up automount boot.automount. Feb 12 19:23:38.340755 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:23:38.340765 systemd[1]: Reached target integritysetup.target. Feb 12 19:23:38.340776 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:23:38.340794 systemd[1]: Reached target remote-fs.target. Feb 12 19:23:38.340804 systemd[1]: Reached target slices.target. Feb 12 19:23:38.340815 systemd[1]: Reached target swap.target. Feb 12 19:23:38.340825 systemd[1]: Reached target torcx.target. Feb 12 19:23:38.340835 systemd[1]: Reached target veritysetup.target. 
Feb 12 19:23:38.340845 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:23:38.340855 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:23:38.340867 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:23:38.340878 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:23:38.340889 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:23:38.340899 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:23:38.340909 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:23:38.340919 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:23:38.340929 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:23:38.340941 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:23:38.340951 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:23:38.340961 systemd[1]: Mounting media.mount... Feb 12 19:23:38.340973 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:23:38.340983 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:23:38.340993 systemd[1]: Mounting tmp.mount... Feb 12 19:23:38.341003 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:23:38.341013 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:23:38.341023 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:23:38.341034 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:23:38.341044 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:23:38.341054 systemd[1]: Starting modprobe@drm.service... Feb 12 19:23:38.341065 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:23:38.341075 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:23:38.341085 systemd[1]: Starting modprobe@loop.service... Feb 12 19:23:38.341096 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 12 19:23:38.341107 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:23:38.341117 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:23:38.341127 systemd[1]: Starting systemd-journald.service... Feb 12 19:23:38.341137 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:23:38.341147 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:23:38.341158 kernel: fuse: init (API version 7.34) Feb 12 19:23:38.341168 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:23:38.341179 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:23:38.341189 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:23:38.341199 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:23:38.341209 systemd[1]: Mounted media.mount. Feb 12 19:23:38.341219 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:23:38.341229 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:23:38.341239 kernel: loop: module loaded Feb 12 19:23:38.341250 systemd[1]: Mounted tmp.mount. Feb 12 19:23:38.341262 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:23:38.341273 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:23:38.341284 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:23:38.341455 systemd-journald[1019]: Journal started Feb 12 19:23:38.341507 systemd-journald[1019]: Runtime Journal (/run/log/journal/98e53e4e6afb48d49022bbe541ceb23c) is 6.0M, max 48.7M, 42.6M free. 
Feb 12 19:23:38.249000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:23:38.249000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:23:38.333000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:23:38.333000 audit[1019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd5d2c7c0 a2=4000 a3=1 items=0 ppid=1 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:38.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.333000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:23:38.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:23:38.343356 systemd[1]: Started systemd-journald.service. Feb 12 19:23:38.343986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:23:38.346920 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:23:38.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.348089 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:23:38.348268 systemd[1]: Finished modprobe@drm.service. Feb 12 19:23:38.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.349273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:23:38.349480 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:23:38.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:23:38.350564 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:23:38.350746 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:23:38.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.351760 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:23:38.351959 systemd[1]: Finished modprobe@loop.service. Feb 12 19:23:38.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.353134 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:23:38.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.354366 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:23:38.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:38.355616 systemd[1]: Finished systemd-remount-fs.service. 
Feb 12 19:23:38.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.356879 systemd[1]: Reached target network-pre.target.
Feb 12 19:23:38.359347 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 19:23:38.361385 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 19:23:38.362151 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:23:38.363934 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 19:23:38.365797 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 19:23:38.366688 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:23:38.372891 systemd[1]: Starting systemd-random-seed.service...
Feb 12 19:23:38.373697 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 19:23:38.374758 systemd-journald[1019]: Time spent on flushing to /var/log/journal/98e53e4e6afb48d49022bbe541ceb23c is 14.690ms for 939 entries.
Feb 12 19:23:38.374758 systemd-journald[1019]: System Journal (/var/log/journal/98e53e4e6afb48d49022bbe541ceb23c) is 8.0M, max 195.6M, 187.6M free.
Feb 12 19:23:38.403268 systemd-journald[1019]: Received client request to flush runtime journal.
Feb 12 19:23:38.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.374923 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:23:38.379758 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 19:23:38.381316 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 19:23:38.382249 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 19:23:38.384621 systemd[1]: Starting systemd-sysusers.service...
Feb 12 19:23:38.387688 systemd[1]: Finished systemd-random-seed.service.
Feb 12 19:23:38.388740 systemd[1]: Reached target first-boot-complete.target.
Feb 12 19:23:38.395549 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:23:38.399679 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:23:38.402382 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 19:23:38.408271 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 19:23:38.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.412633 udevadm[1080]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 12 19:23:38.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.416128 systemd[1]: Finished systemd-sysusers.service.
Feb 12 19:23:38.418444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:23:38.441584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:23:38.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.765991 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 19:23:38.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.768229 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:23:38.789192 systemd-udevd[1087]: Using default interface naming scheme 'v252'.
Feb 12 19:23:38.801245 systemd[1]: Started systemd-udevd.service.
Feb 12 19:23:38.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.804032 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:23:38.815170 systemd[1]: Starting systemd-userdbd.service...
Feb 12 19:23:38.835574 systemd[1]: Found device dev-ttyAMA0.device.
Feb 12 19:23:38.872179 systemd[1]: Started systemd-userdbd.service.
Feb 12 19:23:38.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.888154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:23:38.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.923897 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 19:23:38.926283 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 19:23:38.965242 systemd-networkd[1096]: lo: Link UP
Feb 12 19:23:38.965256 systemd-networkd[1096]: lo: Gained carrier
Feb 12 19:23:38.965609 systemd-networkd[1096]: Enumeration completed
Feb 12 19:23:38.965742 systemd[1]: Started systemd-networkd.service.
Feb 12 19:23:38.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:38.966425 systemd-networkd[1096]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:23:38.974791 lvm[1121]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:23:38.976813 systemd-networkd[1096]: eth0: Link UP
Feb 12 19:23:38.976823 systemd-networkd[1096]: eth0: Gained carrier
Feb 12 19:23:38.993445 systemd-networkd[1096]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 19:23:39.013320 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 19:23:39.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.014117 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:23:39.016012 systemd[1]: Starting lvm2-activation.service...
Feb 12 19:23:39.019901 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 19:23:39.067268 systemd[1]: Finished lvm2-activation.service.
Feb 12 19:23:39.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.068068 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:23:39.068767 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 19:23:39.068805 systemd[1]: Reached target local-fs.target.
Feb 12 19:23:39.069413 systemd[1]: Reached target machines.target.
Feb 12 19:23:39.071350 systemd[1]: Starting ldconfig.service...
Feb 12 19:23:39.072232 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 19:23:39.072313 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:23:39.073713 systemd[1]: Starting systemd-boot-update.service...
Feb 12 19:23:39.075712 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 19:23:39.077720 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 19:23:39.078815 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:23:39.078900 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 19:23:39.080221 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 19:23:39.081385 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1127 (bootctl)
Feb 12 19:23:39.084321 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 19:23:39.089166 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 19:23:39.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.089892 systemd-tmpfiles[1130]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 19:23:39.091947 systemd-tmpfiles[1130]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 19:23:39.102094 systemd-tmpfiles[1130]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 19:23:39.176485 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 19:23:39.177311 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 19:23:39.178090 systemd-fsck[1136]: fsck.fat 4.2 (2021-01-31)
Feb 12 19:23:39.178090 systemd-fsck[1136]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 12 19:23:39.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.183807 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 19:23:39.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.186690 systemd[1]: Mounting boot.mount...
Feb 12 19:23:39.195583 systemd[1]: Mounted boot.mount.
Feb 12 19:23:39.203333 systemd[1]: Finished systemd-boot-update.service.
Feb 12 19:23:39.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.260722 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 19:23:39.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.263202 systemd[1]: Starting audit-rules.service...
Feb 12 19:23:39.265619 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 19:23:39.268158 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 19:23:39.274006 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:23:39.276865 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 19:23:39.279235 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 19:23:39.279424 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 19:23:39.280968 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 19:23:39.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.282497 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 19:23:39.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.288000 audit[1156]: SYSTEM_BOOT pid=1156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.286114 systemd[1]: Finished ldconfig.service.
Feb 12 19:23:39.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.291563 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 19:23:39.293447 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 19:23:39.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.296287 systemd[1]: Starting systemd-update-done.service...
Feb 12 19:23:39.304283 systemd[1]: Finished systemd-update-done.service.
Feb 12 19:23:39.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:39.322000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 19:23:39.322000 audit[1171]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffa4129f0 a2=420 a3=0 items=0 ppid=1144 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:39.322000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 19:23:39.323231 augenrules[1171]: No rules
Feb 12 19:23:39.324365 systemd[1]: Finished audit-rules.service.
Feb 12 19:23:39.351739 systemd[1]: Started systemd-timesyncd.service.
Feb 12 19:23:39.352859 systemd-timesyncd[1155]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 12 19:23:39.352912 systemd-timesyncd[1155]: Initial clock synchronization to Mon 2024-02-12 19:23:39.688483 UTC.
Feb 12 19:23:39.353040 systemd[1]: Reached target time-set.target.
Feb 12 19:23:39.359495 systemd-resolved[1154]: Positive Trust Anchors:
Feb 12 19:23:39.359507 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:23:39.359533 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:23:39.373448 systemd-resolved[1154]: Defaulting to hostname 'linux'.
Feb 12 19:23:39.374850 systemd[1]: Started systemd-resolved.service.
Feb 12 19:23:39.375571 systemd[1]: Reached target network.target.
Feb 12 19:23:39.376155 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:23:39.376771 systemd[1]: Reached target sysinit.target.
Feb 12 19:23:39.377409 systemd[1]: Started motdgen.path.
Feb 12 19:23:39.377944 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 19:23:39.378892 systemd[1]: Started logrotate.timer.
Feb 12 19:23:39.379728 systemd[1]: Started mdadm.timer.
Feb 12 19:23:39.380414 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 19:23:39.381278 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 19:23:39.381327 systemd[1]: Reached target paths.target.
Feb 12 19:23:39.382060 systemd[1]: Reached target timers.target.
Feb 12 19:23:39.383175 systemd[1]: Listening on dbus.socket.
Feb 12 19:23:39.385155 systemd[1]: Starting docker.socket...
Feb 12 19:23:39.386821 systemd[1]: Listening on sshd.socket.
Feb 12 19:23:39.387651 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:23:39.388008 systemd[1]: Listening on docker.socket.
Feb 12 19:23:39.388791 systemd[1]: Reached target sockets.target.
Feb 12 19:23:39.389544 systemd[1]: Reached target basic.target.
Feb 12 19:23:39.390428 systemd[1]: System is tainted: cgroupsv1
Feb 12 19:23:39.390477 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:23:39.390497 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 19:23:39.391655 systemd[1]: Starting containerd.service...
Feb 12 19:23:39.393641 systemd[1]: Starting dbus.service...
Feb 12 19:23:39.395644 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 19:23:39.397911 systemd[1]: Starting extend-filesystems.service...
Feb 12 19:23:39.398859 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 19:23:39.400478 systemd[1]: Starting motdgen.service...
Feb 12 19:23:39.402746 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 19:23:39.404912 systemd[1]: Starting prepare-critools.service...
Feb 12 19:23:39.406941 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 19:23:39.409170 systemd[1]: Starting sshd-keygen.service...
Feb 12 19:23:39.409529 jq[1183]: false
Feb 12 19:23:39.412649 systemd[1]: Starting systemd-logind.service...
Feb 12 19:23:39.416327 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 19:23:39.416426 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 19:23:39.417855 systemd[1]: Starting update-engine.service...
Feb 12 19:23:39.420047 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 19:23:39.422980 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 19:23:39.426120 jq[1201]: true
Feb 12 19:23:39.423275 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 19:23:39.425370 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 19:23:39.425620 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 19:23:39.437443 jq[1209]: true
Feb 12 19:23:39.445230 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 19:23:39.445505 systemd[1]: Finished motdgen.service.
Feb 12 19:23:39.447355 tar[1203]: ./
Feb 12 19:23:39.447355 tar[1203]: ./macvlan
Feb 12 19:23:39.451379 tar[1205]: crictl
Feb 12 19:23:39.452964 dbus-daemon[1182]: [system] SELinux support is enabled
Feb 12 19:23:39.453140 systemd[1]: Started dbus.service.
Feb 12 19:23:39.456886 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 19:23:39.456950 systemd[1]: Reached target system-config.target.
Feb 12 19:23:39.457750 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 19:23:39.457765 systemd[1]: Reached target user-config.target.
Feb 12 19:23:39.465168 extend-filesystems[1184]: Found vda
Feb 12 19:23:39.466545 extend-filesystems[1184]: Found vda1
Feb 12 19:23:39.466545 extend-filesystems[1184]: Found vda2
Feb 12 19:23:39.466545 extend-filesystems[1184]: Found vda3
Feb 12 19:23:39.466545 extend-filesystems[1184]: Found usr
Feb 12 19:23:39.466545 extend-filesystems[1184]: Found vda4
Feb 12 19:23:39.466545 extend-filesystems[1184]: Found vda6
Feb 12 19:23:39.466545 extend-filesystems[1184]: Found vda7
Feb 12 19:23:39.466545 extend-filesystems[1184]: Found vda9
Feb 12 19:23:39.466545 extend-filesystems[1184]: Checking size of /dev/vda9
Feb 12 19:23:39.537434 systemd-logind[1196]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 12 19:23:39.538482 systemd-logind[1196]: New seat seat0.
Feb 12 19:23:39.549953 extend-filesystems[1184]: Resized partition /dev/vda9
Feb 12 19:23:39.551486 systemd[1]: Started systemd-logind.service.
Feb 12 19:23:39.571146 extend-filesystems[1243]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 19:23:39.580421 bash[1234]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 19:23:39.586760 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 19:23:39.608317 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 12 19:23:39.609650 tar[1203]: ./static
Feb 12 19:23:39.633333 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 12 19:23:39.646367 extend-filesystems[1243]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 12 19:23:39.646367 extend-filesystems[1243]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 19:23:39.646367 extend-filesystems[1243]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 12 19:23:39.651026 extend-filesystems[1184]: Resized filesystem in /dev/vda9
Feb 12 19:23:39.647698 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 19:23:39.656512 update_engine[1200]: I0212 19:23:39.654917 1200 main.cc:92] Flatcar Update Engine starting
Feb 12 19:23:39.648005 systemd[1]: Finished extend-filesystems.service.
Feb 12 19:23:39.657503 systemd[1]: Started update-engine.service.
Feb 12 19:23:39.661059 update_engine[1200]: I0212 19:23:39.657543 1200 update_check_scheduler.cc:74] Next update check in 11m29s
Feb 12 19:23:39.660077 systemd[1]: Started locksmithd.service.
Feb 12 19:23:39.665069 env[1211]: time="2024-02-12T19:23:39.665005120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 19:23:39.669379 tar[1203]: ./vlan
Feb 12 19:23:39.689270 env[1211]: time="2024-02-12T19:23:39.689036200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 19:23:39.689603 env[1211]: time="2024-02-12T19:23:39.689578760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:23:39.692615 env[1211]: time="2024-02-12T19:23:39.692536080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:23:39.692820 env[1211]: time="2024-02-12T19:23:39.692800560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:23:39.693392 env[1211]: time="2024-02-12T19:23:39.693363240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:23:39.693494 env[1211]: time="2024-02-12T19:23:39.693477680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 19:23:39.693595 env[1211]: time="2024-02-12T19:23:39.693578760Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 19:23:39.693656 env[1211]: time="2024-02-12T19:23:39.693641280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 19:23:39.693855 env[1211]: time="2024-02-12T19:23:39.693833760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:23:39.694221 env[1211]: time="2024-02-12T19:23:39.694200000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 19:23:39.694469 env[1211]: time="2024-02-12T19:23:39.694446080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 19:23:39.694541 env[1211]: time="2024-02-12T19:23:39.694526320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 19:23:39.694666 env[1211]: time="2024-02-12T19:23:39.694648400Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 19:23:39.694732 env[1211]: time="2024-02-12T19:23:39.694717800Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 19:23:39.698322 env[1211]: time="2024-02-12T19:23:39.698298200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 19:23:39.698433 env[1211]: time="2024-02-12T19:23:39.698416480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 19:23:39.698556 env[1211]: time="2024-02-12T19:23:39.698540600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 19:23:39.698641 env[1211]: time="2024-02-12T19:23:39.698626120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.698703 env[1211]: time="2024-02-12T19:23:39.698688720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.698761 env[1211]: time="2024-02-12T19:23:39.698748120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.698893 env[1211]: time="2024-02-12T19:23:39.698874640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.699390 env[1211]: time="2024-02-12T19:23:39.699362440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.699554 env[1211]: time="2024-02-12T19:23:39.699536440Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.699623 env[1211]: time="2024-02-12T19:23:39.699608880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.699685 env[1211]: time="2024-02-12T19:23:39.699669800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.699829 env[1211]: time="2024-02-12T19:23:39.699811840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 19:23:39.699989 env[1211]: time="2024-02-12T19:23:39.699970800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 19:23:39.700203 env[1211]: time="2024-02-12T19:23:39.700185120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 19:23:39.700814 env[1211]: time="2024-02-12T19:23:39.700790480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 19:23:39.700983 env[1211]: time="2024-02-12T19:23:39.700965320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701052 env[1211]: time="2024-02-12T19:23:39.701038000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 19:23:39.701267 env[1211]: time="2024-02-12T19:23:39.701250400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701352 env[1211]: time="2024-02-12T19:23:39.701337560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701410 env[1211]: time="2024-02-12T19:23:39.701397840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701478 env[1211]: time="2024-02-12T19:23:39.701464440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701542 env[1211]: time="2024-02-12T19:23:39.701528240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701599 env[1211]: time="2024-02-12T19:23:39.701585600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701661 env[1211]: time="2024-02-12T19:23:39.701646920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701717 env[1211]: time="2024-02-12T19:23:39.701704080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.701804 env[1211]: time="2024-02-12T19:23:39.701787680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 19:23:39.701989 env[1211]: time="2024-02-12T19:23:39.701970800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.702066 env[1211]: time="2024-02-12T19:23:39.702051840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.702125 env[1211]: time="2024-02-12T19:23:39.702111920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.702188 env[1211]: time="2024-02-12T19:23:39.702174320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 19:23:39.702265 env[1211]: time="2024-02-12T19:23:39.702250400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 19:23:39.702374 env[1211]: time="2024-02-12T19:23:39.702359960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 19:23:39.702439 env[1211]: time="2024-02-12T19:23:39.702425240Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 19:23:39.702621 env[1211]: time="2024-02-12T19:23:39.702599320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 19:23:39.702926 env[1211]: time="2024-02-12T19:23:39.702871240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 19:23:39.705371 env[1211]: time="2024-02-12T19:23:39.703269920Z" level=info msg="Connect containerd service"
Feb 12 19:23:39.705371 env[1211]: time="2024-02-12T19:23:39.703329160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 19:23:39.706220 env[1211]: time="2024-02-12T19:23:39.706193520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:23:39.707031 env[1211]: time="2024-02-12T19:23:39.707009000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 19:23:39.707208 env[1211]: time="2024-02-12T19:23:39.707192600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 19:23:39.707405 systemd[1]: Started containerd.service.
Feb 12 19:23:39.708596 env[1211]: time="2024-02-12T19:23:39.708577040Z" level=info msg="containerd successfully booted in 0.045034s"
Feb 12 19:23:39.712686 env[1211]: time="2024-02-12T19:23:39.712647960Z" level=info msg="Start subscribing containerd event"
Feb 12 19:23:39.712752 env[1211]: time="2024-02-12T19:23:39.712707960Z" level=info msg="Start recovering state"
Feb 12 19:23:39.712802 env[1211]: time="2024-02-12T19:23:39.712784240Z" level=info msg="Start event monitor"
Feb 12 19:23:39.712831 env[1211]: time="2024-02-12T19:23:39.712810800Z" level=info msg="Start snapshots syncer"
Feb 12 19:23:39.712831 env[1211]: time="2024-02-12T19:23:39.712821720Z" level=info msg="Start cni network conf syncer for default"
Feb 12 19:23:39.712831 env[1211]: time="2024-02-12T19:23:39.712828640Z" level=info msg="Start streaming server"
Feb 12 19:23:39.721724 tar[1203]: ./portmap
Feb 12 19:23:39.751945 tar[1203]: ./host-local
Feb 12 19:23:39.766329 locksmithd[1250]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 19:23:39.779218 tar[1203]: ./vrf
Feb 12 19:23:39.809018 tar[1203]: ./bridge
Feb 12 19:23:39.842886 tar[1203]: ./tuning
Feb 12 19:23:39.870832 tar[1203]: ./firewall
Feb 12 19:23:39.905851 tar[1203]: ./host-device
Feb 12 19:23:39.937324 tar[1203]: ./sbr
Feb 12 19:23:39.948635 systemd[1]: Finished prepare-critools.service.
Feb 12 19:23:39.965737 tar[1203]: ./loopback
Feb 12 19:23:39.988905 tar[1203]: ./dhcp
Feb 12 19:23:40.055528 tar[1203]: ./ptp
Feb 12 19:23:40.085101 tar[1203]: ./ipvlan
Feb 12 19:23:40.113691 tar[1203]: ./bandwidth
Feb 12 19:23:40.152455 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 19:23:40.751497 systemd-networkd[1096]: eth0: Gained IPv6LL
Feb 12 19:23:41.140961 sshd_keygen[1214]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 19:23:41.162945 systemd[1]: Finished sshd-keygen.service.
Feb 12 19:23:41.165402 systemd[1]: Starting issuegen.service...
Feb 12 19:23:41.170312 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 19:23:41.170565 systemd[1]: Finished issuegen.service.
Feb 12 19:23:41.173111 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 19:23:41.179824 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 19:23:41.182516 systemd[1]: Started getty@tty1.service.
Feb 12 19:23:41.184792 systemd[1]: Started serial-getty@ttyAMA0.service.
Feb 12 19:23:41.185969 systemd[1]: Reached target getty.target.
Feb 12 19:23:41.186672 systemd[1]: Reached target multi-user.target.
Feb 12 19:23:41.188669 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 19:23:41.195573 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 19:23:41.195797 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 19:23:41.196953 systemd[1]: Startup finished in 6.272s (kernel) + 5.083s (userspace) = 11.356s.
Feb 12 19:23:43.343088 systemd[1]: Created slice system-sshd.slice.
Feb 12 19:23:43.344292 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:59002.service.
Feb 12 19:23:43.400193 sshd[1284]: Accepted publickey for core from 10.0.0.1 port 59002 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:23:43.402340 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:43.415892 systemd-logind[1196]: New session 1 of user core.
Feb 12 19:23:43.417134 systemd[1]: Created slice user-500.slice.
Feb 12 19:23:43.418465 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 19:23:43.427739 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 19:23:43.429258 systemd[1]: Starting user@500.service...
Feb 12 19:23:43.432431 (systemd)[1289]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:43.499609 systemd[1289]: Queued start job for default target default.target.
Feb 12 19:23:43.499857 systemd[1289]: Reached target paths.target.
Feb 12 19:23:43.499872 systemd[1289]: Reached target sockets.target.
Feb 12 19:23:43.499884 systemd[1289]: Reached target timers.target.
Feb 12 19:23:43.499907 systemd[1289]: Reached target basic.target.
Feb 12 19:23:43.499955 systemd[1289]: Reached target default.target.
Feb 12 19:23:43.499978 systemd[1289]: Startup finished in 61ms.
Feb 12 19:23:43.500101 systemd[1]: Started user@500.service.
Feb 12 19:23:43.501081 systemd[1]: Started session-1.scope.
Feb 12 19:23:43.577730 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:59012.service.
Feb 12 19:23:43.627156 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 59012 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:23:43.628652 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:43.635716 systemd-logind[1196]: New session 2 of user core.
Feb 12 19:23:43.636545 systemd[1]: Started session-2.scope.
Feb 12 19:23:43.698372 sshd[1298]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:43.701073 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:59018.service.
Feb 12 19:23:43.704054 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:59012.service: Deactivated successfully.
Feb 12 19:23:43.705476 systemd-logind[1196]: Session 2 logged out. Waiting for processes to exit.
Feb 12 19:23:43.705481 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 19:23:43.708792 systemd-logind[1196]: Removed session 2.
Feb 12 19:23:43.745516 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 59018 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:23:43.747377 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:43.753965 systemd-logind[1196]: New session 3 of user core.
Feb 12 19:23:43.754901 systemd[1]: Started session-3.scope.
Feb 12 19:23:43.817747 sshd[1303]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:43.819693 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:59032.service.
Feb 12 19:23:43.824862 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:59018.service: Deactivated successfully.
Feb 12 19:23:43.825998 systemd-logind[1196]: Session 3 logged out. Waiting for processes to exit.
Feb 12 19:23:43.826008 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 19:23:43.827101 systemd-logind[1196]: Removed session 3.
Feb 12 19:23:43.869467 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 59032 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:23:43.871123 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:43.878586 systemd-logind[1196]: New session 4 of user core.
Feb 12 19:23:43.879689 systemd[1]: Started session-4.scope.
Feb 12 19:23:43.935835 sshd[1310]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:43.938296 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:59044.service.
Feb 12 19:23:43.938758 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:59032.service: Deactivated successfully.
Feb 12 19:23:43.939728 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 19:23:43.940722 systemd-logind[1196]: Session 4 logged out. Waiting for processes to exit.
Feb 12 19:23:43.941681 systemd-logind[1196]: Removed session 4.
Feb 12 19:23:43.979828 sshd[1318]: Accepted publickey for core from 10.0.0.1 port 59044 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:23:43.981360 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:43.986821 systemd-logind[1196]: New session 5 of user core.
Feb 12 19:23:43.989217 systemd[1]: Started session-5.scope.
Feb 12 19:23:44.073216 sudo[1323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 12 19:23:44.073773 sudo[1323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:23:44.085487 dbus-daemon[1182]: avc: received setenforce notice (enforcing=1)
Feb 12 19:23:44.086557 sudo[1323]: pam_unix(sudo:session): session closed for user root
Feb 12 19:23:44.089004 sshd[1318]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:44.091632 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:59050.service.
Feb 12 19:23:44.092765 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:59044.service: Deactivated successfully.
Feb 12 19:23:44.093926 systemd-logind[1196]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:23:44.093960 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:23:44.094774 systemd-logind[1196]: Removed session 5.
Feb 12 19:23:44.136419 sshd[1325]: Accepted publickey for core from 10.0.0.1 port 59050 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:23:44.137835 sshd[1325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:44.142914 systemd-logind[1196]: New session 6 of user core.
Feb 12 19:23:44.143764 systemd[1]: Started session-6.scope.
Feb 12 19:23:44.198930 sudo[1332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 12 19:23:44.199175 sudo[1332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:23:44.202206 sudo[1332]: pam_unix(sudo:session): session closed for user root
Feb 12 19:23:44.208947 sudo[1331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 12 19:23:44.209417 sudo[1331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:23:44.220400 systemd[1]: Stopping audit-rules.service...
Feb 12 19:23:44.220000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 12 19:23:44.222513 kernel: kauditd_printk_skb: 97 callbacks suppressed
Feb 12 19:23:44.222581 kernel: audit: type=1305 audit(1707765824.220:130): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 12 19:23:44.222976 auditctl[1335]: No rules
Feb 12 19:23:44.223211 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 12 19:23:44.223485 systemd[1]: Stopped audit-rules.service.
Feb 12 19:23:44.227340 kernel: audit: type=1300 audit(1707765824.220:130): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcb783240 a2=420 a3=0 items=0 ppid=1 pid=1335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:44.227413 kernel: audit: type=1327 audit(1707765824.220:130): proctitle=2F7362696E2F617564697463746C002D44
Feb 12 19:23:44.227449 kernel: audit: type=1131 audit(1707765824.221:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.220000 audit[1335]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcb783240 a2=420 a3=0 items=0 ppid=1 pid=1335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:44.220000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 12 19:23:44.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.225215 systemd[1]: Starting audit-rules.service...
Feb 12 19:23:44.243640 augenrules[1353]: No rules
Feb 12 19:23:44.244495 systemd[1]: Finished audit-rules.service.
Feb 12 19:23:44.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.245494 sudo[1331]: pam_unix(sudo:session): session closed for user root
Feb 12 19:23:44.244000 audit[1331]: USER_END pid=1331 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.249546 kernel: audit: type=1130 audit(1707765824.244:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.249597 kernel: audit: type=1106 audit(1707765824.244:133): pid=1331 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.249706 sshd[1325]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:44.244000 audit[1331]: CRED_DISP pid=1331 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.252125 kernel: audit: type=1104 audit(1707765824.244:134): pid=1331 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.257540 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:59056.service.
Feb 12 19:23:44.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:59056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.259000 audit[1325]: USER_END pid=1325 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:23:44.262695 systemd-logind[1196]: Session 6 logged out. Waiting for processes to exit.
Feb 12 19:23:44.262707 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:59050.service: Deactivated successfully.
Feb 12 19:23:44.263600 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 19:23:44.264664 kernel: audit: type=1130 audit(1707765824.256:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:59056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.264744 kernel: audit: type=1106 audit(1707765824.259:136): pid=1325 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:23:44.264768 kernel: audit: type=1104 audit(1707765824.259:137): pid=1325 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:23:44.259000 audit[1325]: CRED_DISP pid=1325 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:23:44.264974 systemd-logind[1196]: Removed session 6.
Feb 12 19:23:44.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.89:22-10.0.0.1:59050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.301000 audit[1358]: USER_ACCT pid=1358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:23:44.302872 sshd[1358]: Accepted publickey for core from 10.0.0.1 port 59056 ssh2: RSA SHA256:0q7ITIIsVkfhf6t5T8C/3bWLc/a3iVJf1KwyHhJJ+LU
Feb 12 19:23:44.302000 audit[1358]: CRED_ACQ pid=1358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:23:44.302000 audit[1358]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffff14980 a2=3 a3=1 items=0 ppid=1 pid=1358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:44.302000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 19:23:44.304279 sshd[1358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:44.307907 systemd-logind[1196]: New session 7 of user core.
Feb 12 19:23:44.308737 systemd[1]: Started session-7.scope.
Feb 12 19:23:44.311000 audit[1358]: USER_START pid=1358 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:23:44.312000 audit[1363]: CRED_ACQ pid=1363 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 12 19:23:44.364000 audit[1364]: USER_ACCT pid=1364 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.364603 sudo[1364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 19:23:44.364000 audit[1364]: CRED_REFR pid=1364 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.364816 sudo[1364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:23:44.366000 audit[1364]: USER_START pid=1364 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:44.909094 systemd[1]: Reloading.
Feb 12 19:23:44.984993 /usr/lib/systemd/system-generators/torcx-generator[1394]: time="2024-02-12T19:23:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:23:44.985024 /usr/lib/systemd/system-generators/torcx-generator[1394]: time="2024-02-12T19:23:44Z" level=info msg="torcx already run"
Feb 12 19:23:45.063663 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:23:45.063687 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:23:45.081093 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:23:45.145326 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:23:45.151684 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:23:45.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:45.152127 systemd[1]: Reached target network-online.target.
Feb 12 19:23:45.153807 systemd[1]: Started kubelet.service.
Feb 12 19:23:45.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:45.166371 systemd[1]: Starting coreos-metadata.service...
Feb 12 19:23:45.175264 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 12 19:23:45.175688 systemd[1]: Finished coreos-metadata.service.
Feb 12 19:23:45.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:45.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:45.359744 kubelet[1438]: E0212 19:23:45.359675 1438 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 19:23:45.361702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:23:45.361858 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:23:45.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Feb 12 19:23:45.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:45.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:45.536574 systemd[1]: Stopped kubelet.service.
Feb 12 19:23:45.553001 systemd[1]: Reloading.
Feb 12 19:23:45.602684 /usr/lib/systemd/system-generators/torcx-generator[1509]: time="2024-02-12T19:23:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:23:45.602717 /usr/lib/systemd/system-generators/torcx-generator[1509]: time="2024-02-12T19:23:45Z" level=info msg="torcx already run"
Feb 12 19:23:45.667408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:23:45.667561 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:23:45.683958 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:23:45.751984 systemd[1]: Started kubelet.service.
Feb 12 19:23:45.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:23:45.795074 kubelet[1553]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:23:45.795074 kubelet[1553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:23:45.795423 kubelet[1553]: I0212 19:23:45.795204 1553 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:23:45.796915 kubelet[1553]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:23:45.796915 kubelet[1553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:23:46.761203 kubelet[1553]: I0212 19:23:46.761157 1553 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 19:23:46.761203 kubelet[1553]: I0212 19:23:46.761191 1553 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:23:46.761436 kubelet[1553]: I0212 19:23:46.761423 1553 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 19:23:46.765600 kubelet[1553]: I0212 19:23:46.765526 1553 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:23:46.767637 kubelet[1553]: W0212 19:23:46.767618 1553 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:23:46.769800 kubelet[1553]: I0212 19:23:46.769773 1553 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:23:46.770544 kubelet[1553]: I0212 19:23:46.770527 1553 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:23:46.770703 kubelet[1553]: I0212 19:23:46.770690 1553 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:23:46.770998 kubelet[1553]: I0212 19:23:46.770971 1553 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:23:46.771066 kubelet[1553]: I0212 19:23:46.771004 1553 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 19:23:46.771358 kubelet[1553]: I0212 19:23:46.771319 1553 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:23:46.776063 kubelet[1553]: I0212 19:23:46.776035 1553 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 19:23:46.776063 kubelet[1553]: I0212 19:23:46.776067 1553 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:23:46.776280 kubelet[1553]: I0212 19:23:46.776271 1553 kubelet.go:297] "Adding apiserver pod source"
Feb 12 19:23:46.776328 kubelet[1553]: I0212 19:23:46.776288 1553 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:23:46.776377 kubelet[1553]: E0212 19:23:46.776363 1553 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:46.776413 kubelet[1553]: E0212 19:23:46.776399 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:23:46.777563 kubelet[1553]: I0212 19:23:46.777545 1553 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:23:46.778413 kubelet[1553]: W0212 19:23:46.778389 1553 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 19:23:46.778926 kubelet[1553]: I0212 19:23:46.778896 1553 server.go:1186] "Started kubelet"
Feb 12 19:23:46.779195 kubelet[1553]: I0212 19:23:46.779168 1553 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:23:46.779809 kubelet[1553]: E0212 19:23:46.779793 1553 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:23:46.779850 kubelet[1553]: E0212 19:23:46.779818 1553 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:23:46.780406 kubelet[1553]: I0212 19:23:46.780388 1553 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 19:23:46.780000 audit[1553]: AVC avc: denied { mac_admin } for pid=1553 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:23:46.780000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 12 19:23:46.780000 audit[1553]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40010173b0 a1=40000f54d0 a2=4001017380 a3=25 items=0 ppid=1 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:46.780000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 12 19:23:46.780000 audit[1553]: AVC avc: denied { mac_admin } for pid=1553 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Feb 12 19:23:46.780000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Feb 12 19:23:46.780000 audit[1553]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000516f20 a1=40000f54e8 a2=4001017890 a3=25 items=0 ppid=1 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:46.780000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Feb 12 19:23:46.781948 kubelet[1553]: I0212 19:23:46.781735 1553 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Feb 12 19:23:46.781948 kubelet[1553]: I0212 19:23:46.781783 1553 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Feb 12 19:23:46.781948 kubelet[1553]: I0212 19:23:46.781843 1553 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:23:46.782469 kubelet[1553]: I0212 19:23:46.782454 1553 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 19:23:46.782562 kubelet[1553]: I0212 19:23:46.782550 1553 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 19:23:46.802819 kubelet[1553]: E0212 19:23:46.802765 1553 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 19:23:46.803169 kubelet[1553]: W0212 19:23:46.802906 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:23:46.803169 kubelet[1553]: E0212 19:23:46.802936 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:23:46.803245 kubelet[1553]: W0212 19:23:46.803115 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:23:46.803245 kubelet[1553]: E0212 19:23:46.803192 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:23:46.803360 kubelet[1553]: E0212 19:23:46.803231 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8a8b4a894", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 778867860, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 778867860, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:23:46.804499 kubelet[1553]: W0212 19:23:46.804476 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:23:46.805142 kubelet[1553]: E0212 19:23:46.805109 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:23:46.805818 kubelet[1553]: E0212 19:23:46.805744 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8a8c30734", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 779809588, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 779809588, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:23:46.806000 audit[1567]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:23:46.806000 audit[1567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffca332700 a2=0 a3=1 items=0 ppid=1553 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:46.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Feb 12 19:23:46.807000 audit[1573]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 19:23:46.807000 audit[1573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffc266e5e0 a2=0 a3=1 items=0 ppid=1553 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:23:46.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Feb 12 19:23:46.820566 kubelet[1553]: I0212 19:23:46.820546 1553 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:23:46.820712 kubelet[1553]: I0212 19:23:46.820701 1553 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:23:46.820770 kubelet[1553]: I0212 19:23:46.820762 1553 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:23:46.821401 kubelet[1553]:
E0212 19:23:46.821319 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab269e3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819890749, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819890749, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:46.822927 kubelet[1553]: E0212 19:23:46.822848 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26d08b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819903627, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819903627, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:46.823855 kubelet[1553]: E0212 19:23:46.823789 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26df07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819907335, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819907335, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:46.824888 kubelet[1553]: I0212 19:23:46.824869 1553 policy_none.go:49] "None policy: Start" Feb 12 19:23:46.825652 kubelet[1553]: I0212 19:23:46.825631 1553 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:23:46.825718 kubelet[1553]: I0212 19:23:46.825659 1553 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:23:46.810000 audit[1575]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.810000 audit[1575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc94827d0 a2=0 a3=1 items=0 ppid=1553 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:23:46.828000 audit[1580]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.828000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd3fde510 a2=0 a3=1 items=0 ppid=1553 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 19:23:46.835006 kubelet[1553]: I0212 19:23:46.834963 1553 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:23:46.833000 audit[1553]: AVC avc: denied { mac_admin } for pid=1553 comm="kubelet" 
capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:23:46.833000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 19:23:46.833000 audit[1553]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e8f2f0 a1=4001091488 a2=4000e8f2c0 a3=25 items=0 ppid=1 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.833000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 19:23:46.835277 kubelet[1553]: I0212 19:23:46.835151 1553 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 19:23:46.835739 kubelet[1553]: I0212 19:23:46.835704 1553 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:23:46.836366 kubelet[1553]: E0212 19:23:46.836346 1553 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.89\" not found" Feb 12 19:23:46.836527 kubelet[1553]: E0212 19:23:46.836381 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ac042e15", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 834411029, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 834411029, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:46.864000 audit[1586]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.864000 audit[1586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffed5bcd70 a2=0 a3=1 items=0 ppid=1553 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.864000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 12 19:23:46.865000 audit[1587]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.865000 audit[1587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd69d5a00 a2=0 a3=1 items=0 ppid=1553 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:23:46.874000 audit[1590]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.874000 audit[1590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff278fa50 a2=0 a3=1 items=0 ppid=1553 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.874000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:23:46.877000 audit[1593]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.877000 audit[1593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffcb954520 a2=0 a3=1 items=0 ppid=1553 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:23:46.879000 audit[1594]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.879000 audit[1594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc64084e0 a2=0 a3=1 items=0 ppid=1553 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:23:46.879000 audit[1595]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.879000 audit[1595]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf6b34b0 a2=0 a3=1 items=0 ppid=1553 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.879000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:23:46.883014 kubelet[1553]: I0212 19:23:46.882991 1553 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:23:46.883000 audit[1597]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.883000 audit[1597]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe2e9df30 a2=0 a3=1 items=0 ppid=1553 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.883000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:23:46.886107 kubelet[1553]: E0212 19:23:46.886073 1553 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:23:46.886194 kubelet[1553]: E0212 19:23:46.886120 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab269e3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819890749, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 882946830, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab269e3d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:46.887133 kubelet[1553]: E0212 19:23:46.887059 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26d08b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819903627, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 
46, 882958444, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26d08b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:46.888008 kubelet[1553]: E0212 19:23:46.887947 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26df07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819907335, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 882961623, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26df07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:46.885000 audit[1599]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.885000 audit[1599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffec84a00 a2=0 a3=1 items=0 ppid=1553 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:23:46.906000 audit[1602]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.906000 audit[1602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffff9eadc10 a2=0 a3=1 items=0 ppid=1553 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:23:46.908000 audit[1604]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1604 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.908000 audit[1604]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe453b490 a2=0 a3=1 items=0 ppid=1553 pid=1604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.908000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:23:46.913000 audit[1607]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.913000 audit[1607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffffc0333a0 a2=0 a3=1 items=0 ppid=1553 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:23:46.915506 kubelet[1553]: I0212 19:23:46.915452 1553 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 19:23:46.914000 audit[1608]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.914000 audit[1608]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff85e2bb0 a2=0 a3=1 items=0 ppid=1553 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 19:23:46.915000 audit[1609]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.915000 audit[1609]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe82bac30 a2=0 a3=1 items=0 ppid=1553 pid=1609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:23:46.915000 audit[1610]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.915000 audit[1610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffccdd4200 a2=0 a3=1 items=0 ppid=1553 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.915000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 19:23:46.916000 audit[1612]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.916000 audit[1612]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6ca1df0 a2=0 a3=1 items=0 ppid=1553 pid=1612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.916000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:23:46.917000 audit[1614]: NETFILTER_CFG table=nat:21 family=10 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.917000 audit[1614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffffe7bbdb0 a2=0 a3=1 items=0 ppid=1553 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 19:23:46.917000 audit[1613]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_chain pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:23:46.917000 audit[1613]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffef59ded0 a2=0 a3=1 items=0 ppid=1553 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 12 19:23:46.917000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 19:23:46.919000 audit[1615]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1615 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.919000 audit[1615]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffffb89d7a0 a2=0 a3=1 items=0 ppid=1553 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 19:23:46.921000 audit[1617]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1617 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.921000 audit[1617]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffe0d58d20 a2=0 a3=1 items=0 ppid=1553 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.921000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 19:23:46.922000 audit[1618]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.922000 audit[1618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff5d8d280 a2=0 a3=1 items=0 ppid=1553 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 19:23:46.923000 audit[1619]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.923000 audit[1619]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd353f040 a2=0 a3=1 items=0 ppid=1553 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 19:23:46.925000 audit[1621]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1621 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.925000 audit[1621]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc9bad420 a2=0 a3=1 items=0 ppid=1553 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 19:23:46.927000 audit[1623]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1623 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.927000 audit[1623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffbe373f0 a2=0 a3=1 items=0 ppid=1553 
pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.927000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 19:23:46.929000 audit[1625]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.929000 audit[1625]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffe9cc6110 a2=0 a3=1 items=0 ppid=1553 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 19:23:46.930000 audit[1627]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.930000 audit[1627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffec570760 a2=0 a3=1 items=0 ppid=1553 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 19:23:46.933000 
audit[1629]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.933000 audit[1629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=fffff1e43840 a2=0 a3=1 items=0 ppid=1553 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 19:23:46.935246 kubelet[1553]: I0212 19:23:46.935224 1553 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:23:46.935460 kubelet[1553]: I0212 19:23:46.935436 1553 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:23:46.935504 kubelet[1553]: I0212 19:23:46.935468 1553 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:23:46.935528 kubelet[1553]: E0212 19:23:46.935516 1553 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:23:46.936906 kubelet[1553]: W0212 19:23:46.936883 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:46.936981 kubelet[1553]: E0212 19:23:46.936915 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at 
the cluster scope Feb 12 19:23:46.935000 audit[1630]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.935000 audit[1630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8791560 a2=0 a3=1 items=0 ppid=1553 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 19:23:46.936000 audit[1631]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.936000 audit[1631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe22c0ae0 a2=0 a3=1 items=0 ppid=1553 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 19:23:46.937000 audit[1632]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:23:46.937000 audit[1632]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa0a7fb0 a2=0 a3=1 items=0 ppid=1553 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:23:46.937000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 19:23:47.004486 kubelet[1553]: E0212 19:23:47.004222 1553 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:47.087201 kubelet[1553]: I0212 19:23:47.087171 1553 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:23:47.088626 kubelet[1553]: E0212 19:23:47.088601 1553 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:23:47.088734 kubelet[1553]: E0212 19:23:47.088571 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab269e3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819890749, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 47, 87114957, time.Local), Count:3, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab269e3d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:47.089689 kubelet[1553]: E0212 19:23:47.089622 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26d08b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819903627, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 47, 87126179, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26d08b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:47.181508 kubelet[1553]: E0212 19:23:47.181415 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26df07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819907335, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 47, 87132115, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26df07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:47.406204 kubelet[1553]: E0212 19:23:47.406092 1553 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:47.489924 kubelet[1553]: I0212 19:23:47.489889 1553 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:23:47.491168 kubelet[1553]: E0212 19:23:47.491089 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab269e3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819890749, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 47, 489832130, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab269e3d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:47.491444 kubelet[1553]: E0212 19:23:47.491422 1553 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:23:47.580856 kubelet[1553]: E0212 19:23:47.580761 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26d08b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819903627, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 47, 489846686, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26d08b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:47.777327 kubelet[1553]: E0212 19:23:47.777147 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:47.781515 kubelet[1553]: E0212 19:23:47.781408 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26df07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819907335, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 47, 489862746, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26df07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:47.900544 kubelet[1553]: W0212 19:23:47.900514 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:47.900916 kubelet[1553]: E0212 19:23:47.900898 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:48.033788 kubelet[1553]: W0212 19:23:48.033689 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:48.033935 kubelet[1553]: E0212 19:23:48.033922 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:48.155059 kubelet[1553]: W0212 19:23:48.155027 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:48.155208 kubelet[1553]: E0212 19:23:48.155197 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:48.208364 kubelet[1553]: E0212 19:23:48.208331 1553 controller.go:146] failed 
to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:48.295006 kubelet[1553]: I0212 19:23:48.294972 1553 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:23:48.296565 kubelet[1553]: E0212 19:23:48.296507 1553 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:23:48.299036 kubelet[1553]: E0212 19:23:48.298956 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab269e3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819890749, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 48, 294931486, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab269e3d" is 
forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:48.300215 kubelet[1553]: E0212 19:23:48.300131 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26d08b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819903627, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 48, 294942279, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26d08b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:48.382802 kubelet[1553]: E0212 19:23:48.382690 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26df07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819907335, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 48, 294945525, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26df07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:48.389060 kubelet[1553]: W0212 19:23:48.389027 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:48.389060 kubelet[1553]: E0212 19:23:48.389059 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:48.777461 kubelet[1553]: E0212 19:23:48.777347 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:49.778262 kubelet[1553]: E0212 19:23:49.778213 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:49.810587 kubelet[1553]: E0212 19:23:49.810544 1553 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:49.816543 kubelet[1553]: W0212 19:23:49.816505 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:49.816543 kubelet[1553]: E0212 19:23:49.816539 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:49.897224 kubelet[1553]: I0212 19:23:49.897200 1553 
kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:23:49.899877 kubelet[1553]: E0212 19:23:49.899793 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab269e3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819890749, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 49, 897163393, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab269e3d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:49.900108 kubelet[1553]: E0212 19:23:49.900083 1553 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:23:49.900763 kubelet[1553]: E0212 19:23:49.900710 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26d08b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819903627, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 49, 897171170, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26d08b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:49.901587 kubelet[1553]: E0212 19:23:49.901535 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26df07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819907335, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 49, 897174248, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26df07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:50.357934 kubelet[1553]: W0212 19:23:50.357895 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:50.357934 kubelet[1553]: E0212 19:23:50.357932 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:50.454936 kubelet[1553]: W0212 19:23:50.454884 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:50.454936 kubelet[1553]: E0212 19:23:50.454921 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:50.778756 kubelet[1553]: E0212 19:23:50.778614 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:51.222095 kubelet[1553]: W0212 19:23:51.222065 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:51.222277 kubelet[1553]: E0212 19:23:51.222265 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster 
scope Feb 12 19:23:51.779725 kubelet[1553]: E0212 19:23:51.779690 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:52.780538 kubelet[1553]: E0212 19:23:52.780467 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:53.012709 kubelet[1553]: E0212 19:23:53.012644 1553 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.89" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:23:53.101414 kubelet[1553]: I0212 19:23:53.101364 1553 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:23:53.102565 kubelet[1553]: E0212 19:23:53.102487 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab269e3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.89 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819890749, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 53, 101331501, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 
0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab269e3d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:53.102758 kubelet[1553]: E0212 19:23:53.102746 1553 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.89" Feb 12 19:23:53.103436 kubelet[1553]: E0212 19:23:53.103369 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26d08b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.89 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819903627, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 53, 101336740, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26d08b" is forbidden: User "system:anonymous" cannot patch resource "events" 
in API group "" in the namespace "default"' (will not retry!) Feb 12 19:23:53.104331 kubelet[1553]: E0212 19:23:53.104272 1553 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89.17b333f8ab26df07", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.89", UID:"10.0.0.89", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.89 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.89"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 23, 46, 819907335, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 23, 53, 101339278, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.89.17b333f8ab26df07" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:23:53.625382 kubelet[1553]: W0212 19:23:53.625351 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:53.625553 kubelet[1553]: E0212 19:23:53.625542 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:23:53.780805 kubelet[1553]: E0212 19:23:53.780755 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:54.308459 kubelet[1553]: W0212 19:23:54.308425 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:54.308658 kubelet[1553]: E0212 19:23:54.308639 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:23:54.782832 kubelet[1553]: E0212 19:23:54.782545 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:55.000942 kubelet[1553]: W0212 19:23:55.000913 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:55.001128 kubelet[1553]: E0212 19:23:55.001117 1553 
reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:23:55.782812 kubelet[1553]: E0212 19:23:55.782768 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:55.827448 kubelet[1553]: W0212 19:23:55.827419 1553 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:55.827846 kubelet[1553]: E0212 19:23:55.827834 1553 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:23:56.763540 kubelet[1553]: I0212 19:23:56.763492 1553 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:23:56.783171 kubelet[1553]: E0212 19:23:56.783128 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:56.836738 kubelet[1553]: E0212 19:23:56.836701 1553 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.89\" not found" Feb 12 19:23:57.179634 kubelet[1553]: E0212 19:23:57.179603 1553 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.89" not found Feb 12 19:23:57.783488 kubelet[1553]: E0212 19:23:57.783453 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:23:58.210080 kubelet[1553]: E0212 19:23:58.210043 1553 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.89" not found Feb 12 19:23:58.783608 kubelet[1553]: E0212 19:23:58.783568 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:59.421626 kubelet[1553]: E0212 19:23:59.421114 1553 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.89\" not found" node="10.0.0.89" Feb 12 19:23:59.503542 kubelet[1553]: I0212 19:23:59.503510 1553 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.89" Feb 12 19:23:59.611133 kubelet[1553]: I0212 19:23:59.611091 1553 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.89" Feb 12 19:23:59.626805 kubelet[1553]: E0212 19:23:59.626770 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:23:59.727989 kubelet[1553]: E0212 19:23:59.727673 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:23:59.771790 sudo[1364]: pam_unix(sudo:session): session closed for user root Feb 12 19:23:59.770000 audit[1364]: USER_END pid=1364 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:23:59.772797 kernel: kauditd_printk_skb: 130 callbacks suppressed Feb 12 19:23:59.772858 kernel: audit: type=1106 audit(1707765839.770:191): pid=1364 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 12 19:23:59.770000 audit[1364]: CRED_DISP pid=1364 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:23:59.775445 sshd[1358]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:59.777350 kernel: audit: type=1104 audit(1707765839.770:192): pid=1364 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 19:23:59.777000 audit[1358]: USER_END pid=1358 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:59.778000 audit[1358]: CRED_DISP pid=1358 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:59.783385 kernel: audit: type=1106 audit(1707765839.777:193): pid=1358 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:59.783487 kernel: audit: type=1104 audit(1707765839.778:194): pid=1358 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 12 19:23:59.783746 kubelet[1553]: E0212 19:23:59.783685 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 12 19:23:59.783772 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:59056.service: Deactivated successfully. Feb 12 19:23:59.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:59056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:59.785191 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:23:59.786533 kernel: audit: type=1131 audit(1707765839.782:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.89:22-10.0.0.1:59056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:23:59.786652 systemd-logind[1196]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:23:59.787636 systemd-logind[1196]: Removed session 7. Feb 12 19:23:59.828421 kubelet[1553]: E0212 19:23:59.828381 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:23:59.929208 kubelet[1553]: E0212 19:23:59.929142 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.030276 kubelet[1553]: E0212 19:24:00.030155 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.131103 kubelet[1553]: E0212 19:24:00.131053 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.231382 kubelet[1553]: E0212 19:24:00.231345 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.332426 kubelet[1553]: E0212 19:24:00.332347 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.433059 
kubelet[1553]: E0212 19:24:00.433026 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.533906 kubelet[1553]: E0212 19:24:00.533867 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.634845 kubelet[1553]: E0212 19:24:00.634710 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.737520 kubelet[1553]: E0212 19:24:00.736589 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.784752 kubelet[1553]: E0212 19:24:00.784714 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:00.837689 kubelet[1553]: E0212 19:24:00.837629 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:00.938354 kubelet[1553]: E0212 19:24:00.938183 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.039255 kubelet[1553]: E0212 19:24:01.039216 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.141728 kubelet[1553]: E0212 19:24:01.140224 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.242116 kubelet[1553]: E0212 19:24:01.241993 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.342572 kubelet[1553]: E0212 19:24:01.342533 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.443511 kubelet[1553]: E0212 19:24:01.443414 1553 kubelet_node_status.go:458] "Error getting the 
current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.543602 kubelet[1553]: E0212 19:24:01.543570 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.644836 kubelet[1553]: E0212 19:24:01.644776 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.745845 kubelet[1553]: E0212 19:24:01.745767 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.785755 kubelet[1553]: E0212 19:24:01.785680 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:01.845992 kubelet[1553]: E0212 19:24:01.845886 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:01.946472 kubelet[1553]: E0212 19:24:01.946369 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.046787 kubelet[1553]: E0212 19:24:02.046722 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.147541 kubelet[1553]: E0212 19:24:02.147420 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.248189 kubelet[1553]: E0212 19:24:02.248129 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.349310 kubelet[1553]: E0212 19:24:02.349235 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.450485 kubelet[1553]: E0212 19:24:02.450350 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.551228 
kubelet[1553]: E0212 19:24:02.551166 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.651701 kubelet[1553]: E0212 19:24:02.651657 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.752185 kubelet[1553]: E0212 19:24:02.752084 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.786871 kubelet[1553]: E0212 19:24:02.786804 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:02.852974 kubelet[1553]: E0212 19:24:02.852909 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:02.953168 kubelet[1553]: E0212 19:24:02.953098 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.054100 kubelet[1553]: E0212 19:24:03.054033 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.154396 kubelet[1553]: E0212 19:24:03.154311 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.255311 kubelet[1553]: E0212 19:24:03.255236 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.356754 kubelet[1553]: E0212 19:24:03.356130 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.457038 kubelet[1553]: E0212 19:24:03.456952 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.557852 kubelet[1553]: E0212 19:24:03.557736 1553 kubelet_node_status.go:458] "Error getting the 
current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.658600 kubelet[1553]: E0212 19:24:03.658500 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.759121 kubelet[1553]: E0212 19:24:03.759072 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.787800 kubelet[1553]: E0212 19:24:03.787748 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:03.859538 kubelet[1553]: E0212 19:24:03.859495 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:03.959802 kubelet[1553]: E0212 19:24:03.959695 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:04.060384 kubelet[1553]: E0212 19:24:04.060337 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:04.160535 kubelet[1553]: E0212 19:24:04.160490 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:04.261873 kubelet[1553]: E0212 19:24:04.261768 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:04.362814 kubelet[1553]: E0212 19:24:04.362770 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:04.463912 kubelet[1553]: E0212 19:24:04.463874 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:04.564566 kubelet[1553]: E0212 19:24:04.564529 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:04.665431 
kubelet[1553]: E0212 19:24:04.665396 1553 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Feb 12 19:24:04.766872 kubelet[1553]: I0212 19:24:04.766790 1553 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:24:04.767094 env[1211]: time="2024-02-12T19:24:04.767050016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:24:04.767383 kubelet[1553]: I0212 19:24:04.767217 1553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:24:04.788350 kubelet[1553]: I0212 19:24:04.788308 1553 apiserver.go:52] "Watching apiserver" Feb 12 19:24:04.788531 kubelet[1553]: E0212 19:24:04.788514 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:04.793372 kubelet[1553]: I0212 19:24:04.793331 1553 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:04.793491 kubelet[1553]: I0212 19:24:04.793429 1553 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:04.793491 kubelet[1553]: I0212 19:24:04.793460 1553 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:04.793628 kubelet[1553]: E0212 19:24:04.793598 1553 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rklhx" podUID=e5010fa5-3c3f-473b-8a11-14f74264629a Feb 12 19:24:04.885150 kubelet[1553]: I0212 19:24:04.885056 1553 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:24:04.903637 kubelet[1553]: I0212 19:24:04.903598 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/a121e6c9-41bb-4ffb-b007-598c3e362fca-node-certs\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.903822 kubelet[1553]: I0212 19:24:04.903809 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e82e0c82-9597-45ea-b1d4-2535fa603206-kube-proxy\") pod \"kube-proxy-df5fl\" (UID: \"e82e0c82-9597-45ea-b1d4-2535fa603206\") " pod="kube-system/kube-proxy-df5fl" Feb 12 19:24:04.903900 kubelet[1553]: I0212 19:24:04.903890 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e5010fa5-3c3f-473b-8a11-14f74264629a-varrun\") pod \"csi-node-driver-rklhx\" (UID: \"e5010fa5-3c3f-473b-8a11-14f74264629a\") " pod="calico-system/csi-node-driver-rklhx" Feb 12 19:24:04.904023 kubelet[1553]: I0212 19:24:04.904011 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggkt5\" (UniqueName: \"kubernetes.io/projected/e82e0c82-9597-45ea-b1d4-2535fa603206-kube-api-access-ggkt5\") pod \"kube-proxy-df5fl\" (UID: \"e82e0c82-9597-45ea-b1d4-2535fa603206\") " pod="kube-system/kube-proxy-df5fl" Feb 12 19:24:04.904108 kubelet[1553]: I0212 19:24:04.904097 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-lib-modules\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904184 kubelet[1553]: I0212 19:24:04.904175 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-xtables-lock\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904248 kubelet[1553]: I0212 19:24:04.904239 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a121e6c9-41bb-4ffb-b007-598c3e362fca-tigera-ca-bundle\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904351 kubelet[1553]: I0212 19:24:04.904341 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-var-run-calico\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904445 kubelet[1553]: I0212 19:24:04.904431 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-cni-net-dir\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904580 kubelet[1553]: I0212 19:24:04.904539 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-flexvol-driver-host\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904624 kubelet[1553]: I0212 19:24:04.904591 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mr5\" (UniqueName: 
\"kubernetes.io/projected/a121e6c9-41bb-4ffb-b007-598c3e362fca-kube-api-access-57mr5\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904656 kubelet[1553]: I0212 19:24:04.904626 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e5010fa5-3c3f-473b-8a11-14f74264629a-socket-dir\") pod \"csi-node-driver-rklhx\" (UID: \"e5010fa5-3c3f-473b-8a11-14f74264629a\") " pod="calico-system/csi-node-driver-rklhx" Feb 12 19:24:04.904656 kubelet[1553]: I0212 19:24:04.904652 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-policysync\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904709 kubelet[1553]: I0212 19:24:04.904673 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-cni-log-dir\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904709 kubelet[1553]: I0212 19:24:04.904703 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e82e0c82-9597-45ea-b1d4-2535fa603206-xtables-lock\") pod \"kube-proxy-df5fl\" (UID: \"e82e0c82-9597-45ea-b1d4-2535fa603206\") " pod="kube-system/kube-proxy-df5fl" Feb 12 19:24:04.904751 kubelet[1553]: I0212 19:24:04.904722 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e82e0c82-9597-45ea-b1d4-2535fa603206-lib-modules\") pod 
\"kube-proxy-df5fl\" (UID: \"e82e0c82-9597-45ea-b1d4-2535fa603206\") " pod="kube-system/kube-proxy-df5fl" Feb 12 19:24:04.904751 kubelet[1553]: I0212 19:24:04.904745 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5010fa5-3c3f-473b-8a11-14f74264629a-kubelet-dir\") pod \"csi-node-driver-rklhx\" (UID: \"e5010fa5-3c3f-473b-8a11-14f74264629a\") " pod="calico-system/csi-node-driver-rklhx" Feb 12 19:24:04.904795 kubelet[1553]: I0212 19:24:04.904774 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e5010fa5-3c3f-473b-8a11-14f74264629a-registration-dir\") pod \"csi-node-driver-rklhx\" (UID: \"e5010fa5-3c3f-473b-8a11-14f74264629a\") " pod="calico-system/csi-node-driver-rklhx" Feb 12 19:24:04.904818 kubelet[1553]: I0212 19:24:04.904796 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqhg4\" (UniqueName: \"kubernetes.io/projected/e5010fa5-3c3f-473b-8a11-14f74264629a-kube-api-access-sqhg4\") pod \"csi-node-driver-rklhx\" (UID: \"e5010fa5-3c3f-473b-8a11-14f74264629a\") " pod="calico-system/csi-node-driver-rklhx" Feb 12 19:24:04.904818 kubelet[1553]: I0212 19:24:04.904816 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-var-lib-calico\") pod \"calico-node-9jvlm\" (UID: \"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904861 kubelet[1553]: I0212 19:24:04.904837 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a121e6c9-41bb-4ffb-b007-598c3e362fca-cni-bin-dir\") pod \"calico-node-9jvlm\" (UID: 
\"a121e6c9-41bb-4ffb-b007-598c3e362fca\") " pod="calico-system/calico-node-9jvlm" Feb 12 19:24:04.904861 kubelet[1553]: I0212 19:24:04.904851 1553 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:24:05.007948 kubelet[1553]: E0212 19:24:05.007913 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.007948 kubelet[1553]: W0212 19:24:05.007941 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.008110 kubelet[1553]: E0212 19:24:05.007999 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.008227 kubelet[1553]: E0212 19:24:05.008198 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.008227 kubelet[1553]: W0212 19:24:05.008211 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.008301 kubelet[1553]: E0212 19:24:05.008252 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.008377 kubelet[1553]: E0212 19:24:05.008364 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.008377 kubelet[1553]: W0212 19:24:05.008375 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.008449 kubelet[1553]: E0212 19:24:05.008423 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.008553 kubelet[1553]: E0212 19:24:05.008523 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.008553 kubelet[1553]: W0212 19:24:05.008535 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.008605 kubelet[1553]: E0212 19:24:05.008580 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.008701 kubelet[1553]: E0212 19:24:05.008688 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.008701 kubelet[1553]: W0212 19:24:05.008699 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.008766 kubelet[1553]: E0212 19:24:05.008751 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.011744 kubelet[1553]: E0212 19:24:05.011227 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.011744 kubelet[1553]: W0212 19:24:05.011253 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.011744 kubelet[1553]: E0212 19:24:05.011504 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.011744 kubelet[1553]: W0212 19:24:05.011513 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.011744 kubelet[1553]: E0212 19:24:05.011638 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.011744 kubelet[1553]: W0212 19:24:05.011644 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.011744 kubelet[1553]: E0212 19:24:05.011746 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.011977 kubelet[1553]: E0212 19:24:05.011782 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.011977 kubelet[1553]: E0212 19:24:05.011808 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.011977 kubelet[1553]: E0212 19:24:05.011899 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.011977 kubelet[1553]: W0212 19:24:05.011907 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.012080 kubelet[1553]: E0212 19:24:05.011980 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.012113 kubelet[1553]: E0212 19:24:05.012096 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.012113 kubelet[1553]: W0212 19:24:05.012103 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.012192 kubelet[1553]: E0212 19:24:05.012173 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.012383 kubelet[1553]: E0212 19:24:05.012258 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.012383 kubelet[1553]: W0212 19:24:05.012269 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.012383 kubelet[1553]: E0212 19:24:05.012360 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.012501 kubelet[1553]: E0212 19:24:05.012485 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.012501 kubelet[1553]: W0212 19:24:05.012498 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.012603 kubelet[1553]: E0212 19:24:05.012586 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.012696 kubelet[1553]: E0212 19:24:05.012652 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.012764 kubelet[1553]: W0212 19:24:05.012751 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.012842 kubelet[1553]: E0212 19:24:05.012824 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.013062 kubelet[1553]: E0212 19:24:05.013047 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.013135 kubelet[1553]: W0212 19:24:05.013122 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.013212 kubelet[1553]: E0212 19:24:05.013195 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.013423 kubelet[1553]: E0212 19:24:05.013408 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.013494 kubelet[1553]: W0212 19:24:05.013482 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.013566 kubelet[1553]: E0212 19:24:05.013551 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.013758 kubelet[1553]: E0212 19:24:05.013745 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.013823 kubelet[1553]: W0212 19:24:05.013812 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.013893 kubelet[1553]: E0212 19:24:05.013878 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.014088 kubelet[1553]: E0212 19:24:05.014076 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.014155 kubelet[1553]: W0212 19:24:05.014144 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.014228 kubelet[1553]: E0212 19:24:05.014215 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.014432 kubelet[1553]: E0212 19:24:05.014418 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.014499 kubelet[1553]: W0212 19:24:05.014486 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.014580 kubelet[1553]: E0212 19:24:05.014565 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.014768 kubelet[1553]: E0212 19:24:05.014756 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.014833 kubelet[1553]: W0212 19:24:05.014822 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.014903 kubelet[1553]: E0212 19:24:05.014889 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.015080 kubelet[1553]: E0212 19:24:05.015067 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.015143 kubelet[1553]: W0212 19:24:05.015131 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.015220 kubelet[1553]: E0212 19:24:05.015207 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.015536 kubelet[1553]: E0212 19:24:05.015517 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.015701 kubelet[1553]: W0212 19:24:05.015682 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.015801 kubelet[1553]: E0212 19:24:05.015782 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.016013 kubelet[1553]: E0212 19:24:05.015999 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.016080 kubelet[1553]: W0212 19:24:05.016068 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.016168 kubelet[1553]: E0212 19:24:05.016154 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.016363 kubelet[1553]: E0212 19:24:05.016349 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.016430 kubelet[1553]: W0212 19:24:05.016418 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.016506 kubelet[1553]: E0212 19:24:05.016490 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.016705 kubelet[1553]: E0212 19:24:05.016692 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.016781 kubelet[1553]: W0212 19:24:05.016768 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.016863 kubelet[1553]: E0212 19:24:05.016847 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.017039 kubelet[1553]: E0212 19:24:05.017026 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.017102 kubelet[1553]: W0212 19:24:05.017090 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.017174 kubelet[1553]: E0212 19:24:05.017159 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.017352 kubelet[1553]: E0212 19:24:05.017339 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.017419 kubelet[1553]: W0212 19:24:05.017406 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.017505 kubelet[1553]: E0212 19:24:05.017490 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.017688 kubelet[1553]: E0212 19:24:05.017674 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.017769 kubelet[1553]: W0212 19:24:05.017755 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.017847 kubelet[1553]: E0212 19:24:05.017832 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.018024 kubelet[1553]: E0212 19:24:05.018010 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.018087 kubelet[1553]: W0212 19:24:05.018076 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.018169 kubelet[1553]: E0212 19:24:05.018154 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.018352 kubelet[1553]: E0212 19:24:05.018339 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.018420 kubelet[1553]: W0212 19:24:05.018408 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.018493 kubelet[1553]: E0212 19:24:05.018478 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.018674 kubelet[1553]: E0212 19:24:05.018661 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.018752 kubelet[1553]: W0212 19:24:05.018729 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.018848 kubelet[1553]: E0212 19:24:05.018831 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.019089 kubelet[1553]: E0212 19:24:05.019074 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.019163 kubelet[1553]: W0212 19:24:05.019150 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.019235 kubelet[1553]: E0212 19:24:05.019220 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.019498 kubelet[1553]: E0212 19:24:05.019484 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.019570 kubelet[1553]: W0212 19:24:05.019558 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.019662 kubelet[1553]: E0212 19:24:05.019645 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.019882 kubelet[1553]: E0212 19:24:05.019867 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.019955 kubelet[1553]: W0212 19:24:05.019941 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.020029 kubelet[1553]: E0212 19:24:05.020013 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.020209 kubelet[1553]: E0212 19:24:05.020195 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.020279 kubelet[1553]: W0212 19:24:05.020265 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.020483 kubelet[1553]: E0212 19:24:05.020424 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.020766 kubelet[1553]: E0212 19:24:05.020750 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.020843 kubelet[1553]: W0212 19:24:05.020829 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.020938 kubelet[1553]: E0212 19:24:05.020914 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.021166 kubelet[1553]: E0212 19:24:05.021153 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.021234 kubelet[1553]: W0212 19:24:05.021220 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.021378 kubelet[1553]: E0212 19:24:05.021361 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.021547 kubelet[1553]: E0212 19:24:05.021533 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.021607 kubelet[1553]: W0212 19:24:05.021597 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.021677 kubelet[1553]: E0212 19:24:05.021664 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.021888 kubelet[1553]: E0212 19:24:05.021875 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.021956 kubelet[1553]: W0212 19:24:05.021943 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.022073 kubelet[1553]: E0212 19:24:05.022050 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.022228 kubelet[1553]: E0212 19:24:05.022215 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.022301 kubelet[1553]: W0212 19:24:05.022277 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.022379 kubelet[1553]: E0212 19:24:05.022364 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.022602 kubelet[1553]: E0212 19:24:05.022588 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.022676 kubelet[1553]: W0212 19:24:05.022664 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.022734 kubelet[1553]: E0212 19:24:05.022724 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.106015 kubelet[1553]: E0212 19:24:05.105981 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.106015 kubelet[1553]: W0212 19:24:05.106004 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.106015 kubelet[1553]: E0212 19:24:05.106025 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.106268 kubelet[1553]: E0212 19:24:05.106252 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.106268 kubelet[1553]: W0212 19:24:05.106265 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.106349 kubelet[1553]: E0212 19:24:05.106278 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.106508 kubelet[1553]: E0212 19:24:05.106482 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.106508 kubelet[1553]: W0212 19:24:05.106494 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.106508 kubelet[1553]: E0212 19:24:05.106506 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.207695 kubelet[1553]: E0212 19:24:05.207605 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.207829 kubelet[1553]: W0212 19:24:05.207813 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.207891 kubelet[1553]: E0212 19:24:05.207881 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.210300 kubelet[1553]: E0212 19:24:05.210279 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.210413 kubelet[1553]: W0212 19:24:05.210399 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.210483 kubelet[1553]: E0212 19:24:05.210473 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.210818 kubelet[1553]: E0212 19:24:05.210803 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.210900 kubelet[1553]: W0212 19:24:05.210888 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.210954 kubelet[1553]: E0212 19:24:05.210945 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.311619 kubelet[1553]: E0212 19:24:05.311584 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.311619 kubelet[1553]: W0212 19:24:05.311606 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.311619 kubelet[1553]: E0212 19:24:05.311626 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.311851 kubelet[1553]: E0212 19:24:05.311826 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.311851 kubelet[1553]: W0212 19:24:05.311837 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.311851 kubelet[1553]: E0212 19:24:05.311853 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.312030 kubelet[1553]: E0212 19:24:05.312010 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.312030 kubelet[1553]: W0212 19:24:05.312022 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.312086 kubelet[1553]: E0212 19:24:05.312033 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.413956 kubelet[1553]: E0212 19:24:05.413361 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.413956 kubelet[1553]: W0212 19:24:05.413382 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.413956 kubelet[1553]: E0212 19:24:05.413411 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.413956 kubelet[1553]: E0212 19:24:05.413653 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.413956 kubelet[1553]: W0212 19:24:05.413663 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.413956 kubelet[1553]: E0212 19:24:05.413676 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.413956 kubelet[1553]: E0212 19:24:05.413864 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.413956 kubelet[1553]: W0212 19:24:05.413873 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.413956 kubelet[1553]: E0212 19:24:05.413884 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.513653 kubelet[1553]: E0212 19:24:05.513501 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.513653 kubelet[1553]: W0212 19:24:05.513522 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.513653 kubelet[1553]: E0212 19:24:05.513541 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.515564 kubelet[1553]: E0212 19:24:05.515528 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.515564 kubelet[1553]: W0212 19:24:05.515545 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.515564 kubelet[1553]: E0212 19:24:05.515564 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.515870 kubelet[1553]: E0212 19:24:05.515857 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.515870 kubelet[1553]: W0212 19:24:05.515870 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.515984 kubelet[1553]: E0212 19:24:05.515882 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.616959 kubelet[1553]: E0212 19:24:05.616934 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.617106 kubelet[1553]: W0212 19:24:05.617091 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.617210 kubelet[1553]: E0212 19:24:05.617198 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.617489 kubelet[1553]: E0212 19:24:05.617476 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.617566 kubelet[1553]: W0212 19:24:05.617554 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.617659 kubelet[1553]: E0212 19:24:05.617647 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.698481 kubelet[1553]: E0212 19:24:05.698454 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:05.699565 env[1211]: time="2024-02-12T19:24:05.699272370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jvlm,Uid:a121e6c9-41bb-4ffb-b007-598c3e362fca,Namespace:calico-system,Attempt:0,}" Feb 12 19:24:05.715990 kubelet[1553]: E0212 19:24:05.715965 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.715990 kubelet[1553]: W0212 19:24:05.715985 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.716136 kubelet[1553]: E0212 19:24:05.716007 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.719168 kubelet[1553]: E0212 19:24:05.719144 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.719316 kubelet[1553]: W0212 19:24:05.719275 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.719387 kubelet[1553]: E0212 19:24:05.719375 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.790436 kubelet[1553]: E0212 19:24:05.789576 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:05.820755 kubelet[1553]: E0212 19:24:05.820706 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.820755 kubelet[1553]: W0212 19:24:05.820732 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.820755 kubelet[1553]: E0212 19:24:05.820758 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 19:24:05.914034 kubelet[1553]: E0212 19:24:05.914000 1553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 19:24:05.914034 kubelet[1553]: W0212 19:24:05.914022 1553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 19:24:05.914034 kubelet[1553]: E0212 19:24:05.914042 1553 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 19:24:05.997166 kubelet[1553]: E0212 19:24:05.997127 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:05.997872 env[1211]: time="2024-02-12T19:24:05.997818176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-df5fl,Uid:e82e0c82-9597-45ea-b1d4-2535fa603206,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:06.176150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998229329.mount: Deactivated successfully. Feb 12 19:24:06.181570 env[1211]: time="2024-02-12T19:24:06.181524389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:06.183465 env[1211]: time="2024-02-12T19:24:06.183422821Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:06.185204 env[1211]: time="2024-02-12T19:24:06.185176623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:06.187463 env[1211]: time="2024-02-12T19:24:06.187410051Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:06.189151 env[1211]: time="2024-02-12T19:24:06.189122760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:06.191533 env[1211]: time="2024-02-12T19:24:06.191501737Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:06.193130 env[1211]: time="2024-02-12T19:24:06.193097214Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:06.195131 env[1211]: time="2024-02-12T19:24:06.195095815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:06.240968 env[1211]: time="2024-02-12T19:24:06.240858183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:06.240968 env[1211]: time="2024-02-12T19:24:06.240911453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:06.240968 env[1211]: time="2024-02-12T19:24:06.240923028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:06.241420 env[1211]: time="2024-02-12T19:24:06.241352026Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3516b0e5138bec63c2c4eb8eba392f816d409015d41161082fa2538afe8f2763 pid=1716 runtime=io.containerd.runc.v2 Feb 12 19:24:06.241833 env[1211]: time="2024-02-12T19:24:06.241662390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:06.241833 env[1211]: time="2024-02-12T19:24:06.241697155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:06.241833 env[1211]: time="2024-02-12T19:24:06.241719224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:06.242243 env[1211]: time="2024-02-12T19:24:06.242191319Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e815e799980048b222e9fdfa0dcacd0ef526a5b6dc1641e405cee0442c3801e pid=1717 runtime=io.containerd.runc.v2 Feb 12 19:24:06.325318 env[1211]: time="2024-02-12T19:24:06.325255442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jvlm,Uid:a121e6c9-41bb-4ffb-b007-598c3e362fca,Namespace:calico-system,Attempt:0,} returns sandbox id \"3516b0e5138bec63c2c4eb8eba392f816d409015d41161082fa2538afe8f2763\"" Feb 12 19:24:06.326171 kubelet[1553]: E0212 19:24:06.326136 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:06.327771 env[1211]: time="2024-02-12T19:24:06.327724816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 12 19:24:06.332701 env[1211]: time="2024-02-12T19:24:06.332649026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-df5fl,Uid:e82e0c82-9597-45ea-b1d4-2535fa603206,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e815e799980048b222e9fdfa0dcacd0ef526a5b6dc1641e405cee0442c3801e\"" Feb 12 19:24:06.333754 kubelet[1553]: E0212 19:24:06.333732 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:06.777370 kubelet[1553]: E0212 19:24:06.777317 1553 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:06.790657 
kubelet[1553]: E0212 19:24:06.790610 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:06.936072 kubelet[1553]: E0212 19:24:06.936018 1553 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rklhx" podUID=e5010fa5-3c3f-473b-8a11-14f74264629a Feb 12 19:24:07.568629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971391743.mount: Deactivated successfully. Feb 12 19:24:07.771392 env[1211]: time="2024-02-12T19:24:07.771338754Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:07.773121 env[1211]: time="2024-02-12T19:24:07.773088428Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:07.774500 env[1211]: time="2024-02-12T19:24:07.774474286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:07.775586 env[1211]: time="2024-02-12T19:24:07.775555158Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:07.775996 env[1211]: time="2024-02-12T19:24:07.775969830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference 
\"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 12 19:24:07.776860 env[1211]: time="2024-02-12T19:24:07.776836818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:24:07.779712 env[1211]: time="2024-02-12T19:24:07.779684862Z" level=info msg="CreateContainer within sandbox \"3516b0e5138bec63c2c4eb8eba392f816d409015d41161082fa2538afe8f2763\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 12 19:24:07.790992 kubelet[1553]: E0212 19:24:07.790945 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:07.803527 env[1211]: time="2024-02-12T19:24:07.803476244Z" level=info msg="CreateContainer within sandbox \"3516b0e5138bec63c2c4eb8eba392f816d409015d41161082fa2538afe8f2763\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"95822b2e9b544f2b4b92c17755c8401b0e5ece14497448e57a2dd3915124f264\"" Feb 12 19:24:07.804252 env[1211]: time="2024-02-12T19:24:07.804224617Z" level=info msg="StartContainer for \"95822b2e9b544f2b4b92c17755c8401b0e5ece14497448e57a2dd3915124f264\"" Feb 12 19:24:07.872008 env[1211]: time="2024-02-12T19:24:07.871495889Z" level=info msg="StartContainer for \"95822b2e9b544f2b4b92c17755c8401b0e5ece14497448e57a2dd3915124f264\" returns successfully" Feb 12 19:24:07.973667 kubelet[1553]: E0212 19:24:07.973622 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:08.069038 env[1211]: time="2024-02-12T19:24:08.068987502Z" level=info msg="shim disconnected" id=95822b2e9b544f2b4b92c17755c8401b0e5ece14497448e57a2dd3915124f264 Feb 12 19:24:08.069229 env[1211]: time="2024-02-12T19:24:08.069045199Z" level=warning msg="cleaning up after shim disconnected" id=95822b2e9b544f2b4b92c17755c8401b0e5ece14497448e57a2dd3915124f264 namespace=k8s.io Feb 12 
19:24:08.069229 env[1211]: time="2024-02-12T19:24:08.069056090Z" level=info msg="cleaning up dead shim" Feb 12 19:24:08.076781 env[1211]: time="2024-02-12T19:24:08.076734745Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1836 runtime=io.containerd.runc.v2\n" Feb 12 19:24:08.791479 kubelet[1553]: E0212 19:24:08.791414 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:08.936738 kubelet[1553]: E0212 19:24:08.936689 1553 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rklhx" podUID=e5010fa5-3c3f-473b-8a11-14f74264629a Feb 12 19:24:08.976750 kubelet[1553]: E0212 19:24:08.976692 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:09.085169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462207256.mount: Deactivated successfully. 
Feb 12 19:24:09.432304 env[1211]: time="2024-02-12T19:24:09.432019091Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:09.434341 env[1211]: time="2024-02-12T19:24:09.434297919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:09.436000 env[1211]: time="2024-02-12T19:24:09.435956126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:09.438018 env[1211]: time="2024-02-12T19:24:09.437959474Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:09.438272 env[1211]: time="2024-02-12T19:24:09.438233233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 19:24:09.439655 env[1211]: time="2024-02-12T19:24:09.439610074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 12 19:24:09.440980 env[1211]: time="2024-02-12T19:24:09.440938192Z" level=info msg="CreateContainer within sandbox \"2e815e799980048b222e9fdfa0dcacd0ef526a5b6dc1641e405cee0442c3801e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:24:09.456154 env[1211]: time="2024-02-12T19:24:09.456096697Z" level=info msg="CreateContainer within sandbox \"2e815e799980048b222e9fdfa0dcacd0ef526a5b6dc1641e405cee0442c3801e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"fb2fb67ff0fc762b0409fcb109fb2022834b48063f3b4f2b69a3f2226e376f14\"" Feb 12 19:24:09.457062 env[1211]: time="2024-02-12T19:24:09.457020863Z" level=info msg="StartContainer for \"fb2fb67ff0fc762b0409fcb109fb2022834b48063f3b4f2b69a3f2226e376f14\"" Feb 12 19:24:09.531717 env[1211]: time="2024-02-12T19:24:09.531653333Z" level=info msg="StartContainer for \"fb2fb67ff0fc762b0409fcb109fb2022834b48063f3b4f2b69a3f2226e376f14\" returns successfully" Feb 12 19:24:09.659000 audit[1905]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.659000 audit[1905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc09c11e0 a2=0 a3=ffff9ab296c0 items=0 ppid=1866 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.668886 kernel: audit: type=1325 audit(1707765849.659:196): table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.669034 kernel: audit: type=1300 audit(1707765849.659:196): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc09c11e0 a2=0 a3=ffff9ab296c0 items=0 ppid=1866 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.669060 kernel: audit: type=1327 audit(1707765849.659:196): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:24:09.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:24:09.662000 audit[1907]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain 
pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.671858 kernel: audit: type=1325 audit(1707765849.662:197): table=nat:36 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.671955 kernel: audit: type=1300 audit(1707765849.662:197): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9696e10 a2=0 a3=ffffb900f6c0 items=0 ppid=1866 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.662000 audit[1907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9696e10 a2=0 a3=ffffb900f6c0 items=0 ppid=1866 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:24:09.676273 kernel: audit: type=1327 audit(1707765849.662:197): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:24:09.676369 kernel: audit: type=1325 audit(1707765849.663:198): table=mangle:37 family=10 entries=1 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.663000 audit[1906]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.677673 kernel: audit: type=1300 audit(1707765849.663:198): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7e319e0 a2=0 a3=ffffb1a9c6c0 items=0 ppid=1866 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.663000 audit[1906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7e319e0 a2=0 a3=ffffb1a9c6c0 items=0 ppid=1866 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.680692 kernel: audit: type=1327 audit(1707765849.663:198): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:24:09.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 19:24:09.682132 kernel: audit: type=1325 audit(1707765849.664:199): table=filter:38 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.664000 audit[1909]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.664000 audit[1909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff19da240 a2=0 a3=ffff8140f6c0 items=0 ppid=1866 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:24:09.664000 audit[1910]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.664000 audit[1910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef0cb640 a2=0 a3=ffffa591b6c0 items=0 ppid=1866 pid=1910 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.664000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 19:24:09.667000 audit[1911]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.667000 audit[1911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda288f30 a2=0 a3=ffffa94ab6c0 items=0 ppid=1866 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 19:24:09.762000 audit[1912]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.762000 audit[1912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc55cf230 a2=0 a3=ffff989026c0 items=0 ppid=1866 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:24:09.766000 audit[1914]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.766000 audit[1914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 
a1=ffffeb10eeb0 a2=0 a3=ffff9cd146c0 items=0 ppid=1866 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 12 19:24:09.771000 audit[1917]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.771000 audit[1917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc353b200 a2=0 a3=ffff820156c0 items=0 ppid=1866 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.771000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 12 19:24:09.773000 audit[1918]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.773000 audit[1918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1ef7280 a2=0 a3=ffffaa6436c0 items=0 ppid=1866 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.773000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:24:09.778000 audit[1920]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.778000 audit[1920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff8424ee0 a2=0 a3=ffff8e61d6c0 items=0 ppid=1866 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:24:09.779000 audit[1921]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.779000 audit[1921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe7fb1720 a2=0 a3=ffff86e3b6c0 items=0 ppid=1866 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:24:09.784000 audit[1923]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.784000 audit[1923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffca439330 a2=0 a3=ffffbb8546c0 items=0 ppid=1866 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:24:09.791670 kubelet[1553]: E0212 19:24:09.791634 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:09.791000 audit[1926]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.791000 audit[1926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd76a6750 a2=0 a3=ffffb59596c0 items=0 ppid=1866 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 12 19:24:09.793000 audit[1927]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.793000 audit[1927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6df80a0 a2=0 a3=ffff9dd176c0 items=0 ppid=1866 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.793000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:24:09.797000 audit[1929]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.797000 audit[1929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffaa96fe0 a2=0 a3=ffffa82556c0 items=0 ppid=1866 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.797000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:24:09.800000 audit[1930]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.800000 audit[1930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddfc57e0 a2=0 a3=ffff841e56c0 items=0 ppid=1866 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:24:09.803000 audit[1932]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.803000 audit[1932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff44b4e50 a2=0 a3=ffffbabd96c0 items=0 ppid=1866 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:24:09.808000 audit[1935]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.808000 audit[1935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd91a72f0 a2=0 a3=ffff83e886c0 items=0 ppid=1866 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:24:09.813000 audit[1938]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.813000 audit[1938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdc8fb360 a2=0 a3=ffffb4ae26c0 items=0 ppid=1866 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.813000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:24:09.814000 audit[1939]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.814000 audit[1939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc09a4b20 a2=0 a3=ffffae69f6c0 items=0 ppid=1866 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:24:09.819000 audit[1941]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.819000 audit[1941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffcf9e61c0 a2=0 a3=ffff82b636c0 items=0 ppid=1866 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:24:09.823000 audit[1944]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 19:24:09.823000 audit[1944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff6df6d30 a2=0 
a3=ffffb4c3a6c0 items=0 ppid=1866 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:24:09.830000 audit[1948]: NETFILTER_CFG table=filter:58 family=2 entries=3 op=nft_register_rule pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:09.830000 audit[1948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd03fcb40 a2=0 a3=ffff962ae6c0 items=0 ppid=1866 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:09.848000 audit[1948]: NETFILTER_CFG table=nat:59 family=2 entries=57 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:09.848000 audit[1948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd03fcb40 a2=0 a3=ffff962ae6c0 items=0 ppid=1866 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:09.862000 audit[1955]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain 
pid=1955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.862000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcf8a87a0 a2=0 a3=ffff944046c0 items=0 ppid=1866 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 19:24:09.865000 audit[1957]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.865000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff3646fe0 a2=0 a3=ffffa1fa76c0 items=0 ppid=1866 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 12 19:24:09.871000 audit[1960]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.871000 audit[1960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcf72d450 a2=0 a3=ffffbca206c0 items=0 ppid=1866 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.871000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 12 19:24:09.872000 audit[1961]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.872000 audit[1961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8bdda90 a2=0 a3=ffff8309c6c0 items=0 ppid=1866 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 19:24:09.876000 audit[1963]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.876000 audit[1963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc9427f00 a2=0 a3=ffff8a7016c0 items=0 ppid=1866 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 19:24:09.878000 audit[1964]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.878000 audit[1964]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=fffff95d42e0 a2=0 a3=ffffbf1226c0 items=0 ppid=1866 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 19:24:09.881000 audit[1966]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.881000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffdc620f0 a2=0 a3=ffffac5856c0 items=0 ppid=1866 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 12 19:24:09.886000 audit[1969]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.886000 audit[1969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdc805f70 a2=0 a3=ffff8a4186c0 items=0 ppid=1866 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.886000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 19:24:09.887000 audit[1970]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.887000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc6ebb40 a2=0 a3=ffff98f4b6c0 items=0 ppid=1866 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 19:24:09.890000 audit[1972]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.890000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe0a0e2b0 a2=0 a3=ffffae2a56c0 items=0 ppid=1866 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 19:24:09.891000 audit[1973]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.891000 audit[1973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 
a0=3 a1=fffffb16ce10 a2=0 a3=ffff9892c6c0 items=0 ppid=1866 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 19:24:09.894000 audit[1975]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.894000 audit[1975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc81cfdb0 a2=0 a3=ffff9ac5d6c0 items=0 ppid=1866 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 19:24:09.898000 audit[1978]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.898000 audit[1978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc5a0dc90 a2=0 a3=ffffbbe266c0 items=0 ppid=1866 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.898000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 19:24:09.902000 audit[1981]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.902000 audit[1981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff7eac050 a2=0 a3=ffffa16cc6c0 items=0 ppid=1866 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 12 19:24:09.903000 audit[1982]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.903000 audit[1982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd3eab000 a2=0 a3=ffff935986c0 items=0 ppid=1866 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.903000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 19:24:09.906000 audit[1984]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.906000 audit[1984]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=600 a0=3 a1=ffffea5310f0 a2=0 a3=ffffbab226c0 items=0 ppid=1866 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.906000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:24:09.913000 audit[1987]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 19:24:09.913000 audit[1987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe888cf80 a2=0 a3=ffff7fd7c6c0 items=0 ppid=1866 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 19:24:09.919000 audit[1991]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:24:09.919000 audit[1991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffee7c1eb0 a2=0 a3=ffff9b2506c0 items=0 ppid=1866 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.919000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:09.920000 audit[1991]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 19:24:09.920000 audit[1991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffee7c1eb0 a2=0 a3=ffff9b2506c0 items=0 ppid=1866 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:09.920000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:09.979925 kubelet[1553]: E0212 19:24:09.979814 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:10.010858 kubelet[1553]: I0212 19:24:10.010790 1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-df5fl" podStartSLOduration=-9.223372025844042e+09 pod.CreationTimestamp="2024-02-12 19:23:59 +0000 UTC" firstStartedPulling="2024-02-12 19:24:06.334759453 +0000 UTC m=+20.579187152" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:10.010425353 +0000 UTC m=+24.254853053" watchObservedRunningTime="2024-02-12 19:24:10.010733629 +0000 UTC m=+24.255161328" Feb 12 19:24:10.792805 kubelet[1553]: E0212 19:24:10.792758 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:10.874004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239084178.mount: Deactivated successfully. 
Feb 12 19:24:10.936327 kubelet[1553]: E0212 19:24:10.936057 1553 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rklhx" podUID=e5010fa5-3c3f-473b-8a11-14f74264629a Feb 12 19:24:10.980906 kubelet[1553]: E0212 19:24:10.980860 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:11.793601 kubelet[1553]: E0212 19:24:11.793557 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:12.641423 env[1211]: time="2024-02-12T19:24:12.641370141Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:12.644002 env[1211]: time="2024-02-12T19:24:12.643960211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:12.645243 env[1211]: time="2024-02-12T19:24:12.645211751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:12.647134 env[1211]: time="2024-02-12T19:24:12.647065261Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:12.647535 env[1211]: time="2024-02-12T19:24:12.647500736Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 12 19:24:12.649872 env[1211]: time="2024-02-12T19:24:12.649827351Z" level=info msg="CreateContainer within sandbox \"3516b0e5138bec63c2c4eb8eba392f816d409015d41161082fa2538afe8f2763\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 19:24:12.660059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3299754559.mount: Deactivated successfully. Feb 12 19:24:12.668851 env[1211]: time="2024-02-12T19:24:12.668789638Z" level=info msg="CreateContainer within sandbox \"3516b0e5138bec63c2c4eb8eba392f816d409015d41161082fa2538afe8f2763\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a9ec2b3fc85747b09c4eb78124bdce0fa94f8002650e19b1b6eb8481f26b3d63\"" Feb 12 19:24:12.669383 env[1211]: time="2024-02-12T19:24:12.669352977Z" level=info msg="StartContainer for \"a9ec2b3fc85747b09c4eb78124bdce0fa94f8002650e19b1b6eb8481f26b3d63\"" Feb 12 19:24:12.749346 env[1211]: time="2024-02-12T19:24:12.749266156Z" level=info msg="StartContainer for \"a9ec2b3fc85747b09c4eb78124bdce0fa94f8002650e19b1b6eb8481f26b3d63\" returns successfully" Feb 12 19:24:12.794441 kubelet[1553]: E0212 19:24:12.794380 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:12.937204 kubelet[1553]: E0212 19:24:12.936774 1553 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rklhx" podUID=e5010fa5-3c3f-473b-8a11-14f74264629a Feb 12 19:24:12.985815 kubelet[1553]: E0212 19:24:12.985405 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:13.611342 env[1211]: time="2024-02-12T19:24:13.611265775Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:24:13.658276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9ec2b3fc85747b09c4eb78124bdce0fa94f8002650e19b1b6eb8481f26b3d63-rootfs.mount: Deactivated successfully. Feb 12 19:24:13.660359 kubelet[1553]: I0212 19:24:13.660326 1553 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:24:13.662529 env[1211]: time="2024-02-12T19:24:13.662489146Z" level=info msg="shim disconnected" id=a9ec2b3fc85747b09c4eb78124bdce0fa94f8002650e19b1b6eb8481f26b3d63 Feb 12 19:24:13.662970 env[1211]: time="2024-02-12T19:24:13.662948060Z" level=warning msg="cleaning up after shim disconnected" id=a9ec2b3fc85747b09c4eb78124bdce0fa94f8002650e19b1b6eb8481f26b3d63 namespace=k8s.io Feb 12 19:24:13.663044 env[1211]: time="2024-02-12T19:24:13.663031444Z" level=info msg="cleaning up dead shim" Feb 12 19:24:13.671277 env[1211]: time="2024-02-12T19:24:13.671232914Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2049 runtime=io.containerd.runc.v2\n" Feb 12 19:24:13.795264 kubelet[1553]: E0212 19:24:13.795192 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:13.989831 kubelet[1553]: E0212 19:24:13.989729 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:13.991150 env[1211]: time="2024-02-12T19:24:13.991113792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 12 
19:24:14.796233 kubelet[1553]: E0212 19:24:14.796181 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:14.940352 env[1211]: time="2024-02-12T19:24:14.940305330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rklhx,Uid:e5010fa5-3c3f-473b-8a11-14f74264629a,Namespace:calico-system,Attempt:0,}" Feb 12 19:24:15.264478 env[1211]: time="2024-02-12T19:24:15.264344332Z" level=error msg="Failed to destroy network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:15.264964 env[1211]: time="2024-02-12T19:24:15.264924210Z" level=error msg="encountered an error cleaning up failed sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:15.265084 env[1211]: time="2024-02-12T19:24:15.265056821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rklhx,Uid:e5010fa5-3c3f-473b-8a11-14f74264629a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:15.265710 kubelet[1553]: E0212 19:24:15.265382 1553 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:15.265710 kubelet[1553]: E0212 19:24:15.265438 1553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rklhx" Feb 12 19:24:15.265710 kubelet[1553]: E0212 19:24:15.265471 1553 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rklhx" Feb 12 19:24:15.265855 kubelet[1553]: E0212 19:24:15.265524 1553 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rklhx_calico-system(e5010fa5-3c3f-473b-8a11-14f74264629a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rklhx_calico-system(e5010fa5-3c3f-473b-8a11-14f74264629a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rklhx" 
podUID=e5010fa5-3c3f-473b-8a11-14f74264629a Feb 12 19:24:15.265962 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903-shm.mount: Deactivated successfully. Feb 12 19:24:15.796788 kubelet[1553]: E0212 19:24:15.796743 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:15.993346 kubelet[1553]: I0212 19:24:15.993139 1553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:15.994038 env[1211]: time="2024-02-12T19:24:15.993996066Z" level=info msg="StopPodSandbox for \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\"" Feb 12 19:24:16.015002 kubelet[1553]: I0212 19:24:16.014965 1553 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:16.049491 env[1211]: time="2024-02-12T19:24:16.049056567Z" level=error msg="StopPodSandbox for \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\" failed" error="failed to destroy network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:16.049809 kubelet[1553]: E0212 19:24:16.049351 1553 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:16.049809 kubelet[1553]: E0212 19:24:16.049414 
1553 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903} Feb 12 19:24:16.049809 kubelet[1553]: E0212 19:24:16.049448 1553 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5010fa5-3c3f-473b-8a11-14f74264629a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:24:16.049809 kubelet[1553]: E0212 19:24:16.049484 1553 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5010fa5-3c3f-473b-8a11-14f74264629a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rklhx" podUID=e5010fa5-3c3f-473b-8a11-14f74264629a Feb 12 19:24:16.205026 kubelet[1553]: I0212 19:24:16.204989 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7h7\" (UniqueName: \"kubernetes.io/projected/5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b-kube-api-access-ht7h7\") pod \"nginx-deployment-8ffc5cf85-nsmp7\" (UID: \"5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b\") " pod="default/nginx-deployment-8ffc5cf85-nsmp7" Feb 12 19:24:16.618357 env[1211]: time="2024-02-12T19:24:16.618277469Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-nsmp7,Uid:5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b,Namespace:default,Attempt:0,}" Feb 12 19:24:16.784331 env[1211]: time="2024-02-12T19:24:16.784242623Z" level=error msg="Failed to destroy network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:16.784655 env[1211]: time="2024-02-12T19:24:16.784613505Z" level=error msg="encountered an error cleaning up failed sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:16.784723 env[1211]: time="2024-02-12T19:24:16.784662817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-nsmp7,Uid:5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:16.785053 kubelet[1553]: E0212 19:24:16.785017 1553 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:16.785157 kubelet[1553]: E0212 
19:24:16.785072 1553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-nsmp7" Feb 12 19:24:16.785157 kubelet[1553]: E0212 19:24:16.785093 1553 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-nsmp7" Feb 12 19:24:16.785208 kubelet[1553]: E0212 19:24:16.785157 1553 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-nsmp7_default(5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-nsmp7_default(5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-nsmp7" podUID=5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b Feb 12 19:24:16.785884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1-shm.mount: Deactivated successfully. 
Feb 12 19:24:16.798235 kubelet[1553]: E0212 19:24:16.797606 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:16.995827 kubelet[1553]: I0212 19:24:16.995147 1553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:16.995947 env[1211]: time="2024-02-12T19:24:16.995611409Z" level=info msg="StopPodSandbox for \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\"" Feb 12 19:24:17.019739 env[1211]: time="2024-02-12T19:24:17.019671125Z" level=error msg="StopPodSandbox for \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\" failed" error="failed to destroy network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 19:24:17.019931 kubelet[1553]: E0212 19:24:17.019908 1553 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:17.019993 kubelet[1553]: E0212 19:24:17.019954 1553 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1} Feb 12 19:24:17.019993 kubelet[1553]: E0212 19:24:17.019988 1553 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 12 19:24:17.020074 kubelet[1553]: E0212 19:24:17.020015 1553 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-nsmp7" podUID=5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b Feb 12 19:24:17.797878 kubelet[1553]: E0212 19:24:17.797829 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:18.800792 kubelet[1553]: E0212 19:24:18.800741 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:18.962200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338306301.mount: Deactivated successfully. 
Feb 12 19:24:19.400475 env[1211]: time="2024-02-12T19:24:19.400412943Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:19.404792 env[1211]: time="2024-02-12T19:24:19.404733046Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:19.406367 env[1211]: time="2024-02-12T19:24:19.406328766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:19.407622 env[1211]: time="2024-02-12T19:24:19.407575653Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:19.408195 env[1211]: time="2024-02-12T19:24:19.408160176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 12 19:24:19.421263 env[1211]: time="2024-02-12T19:24:19.421199047Z" level=info msg="CreateContainer within sandbox \"3516b0e5138bec63c2c4eb8eba392f816d409015d41161082fa2538afe8f2763\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 12 19:24:19.487589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270829415.mount: Deactivated successfully. 
Feb 12 19:24:19.500531 env[1211]: time="2024-02-12T19:24:19.500477654Z" level=info msg="CreateContainer within sandbox \"3516b0e5138bec63c2c4eb8eba392f816d409015d41161082fa2538afe8f2763\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec2e65eee2f4c1e91af5ef4dbc6988941e554ca15996cd3e6064109abd8739f0\"" Feb 12 19:24:19.501424 env[1211]: time="2024-02-12T19:24:19.501387916Z" level=info msg="StartContainer for \"ec2e65eee2f4c1e91af5ef4dbc6988941e554ca15996cd3e6064109abd8739f0\"" Feb 12 19:24:19.564982 env[1211]: time="2024-02-12T19:24:19.564915756Z" level=info msg="StartContainer for \"ec2e65eee2f4c1e91af5ef4dbc6988941e554ca15996cd3e6064109abd8739f0\" returns successfully" Feb 12 19:24:19.717335 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 12 19:24:19.717476 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 12 19:24:19.801269 kubelet[1553]: E0212 19:24:19.801194 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:20.001978 kubelet[1553]: E0212 19:24:20.001863 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:20.021057 kubelet[1553]: I0212 19:24:20.021011 1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9jvlm" podStartSLOduration=-9.223372015833809e+09 pod.CreationTimestamp="2024-02-12 19:23:59 +0000 UTC" firstStartedPulling="2024-02-12 19:24:06.327100003 +0000 UTC m=+20.571527662" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:20.019673811 +0000 UTC m=+34.264101510" watchObservedRunningTime="2024-02-12 19:24:20.020967528 +0000 UTC m=+34.265395227" Feb 12 19:24:20.801890 kubelet[1553]: E0212 19:24:20.801837 1553 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:20.985000 audit[2313]: AVC avc: denied { write } for pid=2313 comm="tee" name="fd" dev="proc" ino=14796 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:20.987768 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 12 19:24:20.987870 kernel: audit: type=1400 audit(1707765860.985:240): avc: denied { write } for pid=2313 comm="tee" name="fd" dev="proc" ino=14796 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:20.990000 audit[2321]: AVC avc: denied { write } for pid=2321 comm="tee" name="fd" dev="proc" ino=15435 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:20.990000 audit[2321]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe4578980 a2=241 a3=1b6 items=1 ppid=2261 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:20.995555 kernel: audit: type=1400 audit(1707765860.990:241): avc: denied { write } for pid=2321 comm="tee" name="fd" dev="proc" ino=15435 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:20.995653 kernel: audit: type=1300 audit(1707765860.990:241): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe4578980 a2=241 a3=1b6 items=1 ppid=2261 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:20.995674 kernel: audit: type=1307 audit(1707765860.990:241): cwd="/etc/service/enabled/node-status-reporter/log" Feb 12 19:24:20.990000 audit: CWD 
cwd="/etc/service/enabled/node-status-reporter/log" Feb 12 19:24:20.990000 audit: PATH item=0 name="/dev/fd/63" inode=15432 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:20.998351 kernel: audit: type=1302 audit(1707765860.990:241): item=0 name="/dev/fd/63" inode=15432 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:20.998400 kernel: audit: type=1327 audit(1707765860.990:241): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:24:20.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:24:20.993000 audit[2316]: AVC avc: denied { write } for pid=2316 comm="tee" name="fd" dev="proc" ino=14016 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:21.001777 kernel: audit: type=1400 audit(1707765860.993:242): avc: denied { write } for pid=2316 comm="tee" name="fd" dev="proc" ino=14016 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:20.993000 audit[2316]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdca2598f a2=241 a3=1b6 items=1 ppid=2260 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.004152 kubelet[1553]: I0212 19:24:21.002929 1553 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:24:21.004152 kubelet[1553]: E0212 19:24:21.003734 1553 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:21.004935 kernel: audit: type=1300 audit(1707765860.993:242): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdca2598f a2=241 a3=1b6 items=1 ppid=2260 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:20.993000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 12 19:24:21.008728 kernel: audit: type=1307 audit(1707765860.993:242): cwd="/etc/service/enabled/felix/log" Feb 12 19:24:21.008772 kernel: audit: type=1302 audit(1707765860.993:242): item=0 name="/dev/fd/63" inode=14012 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:20.993000 audit: PATH item=0 name="/dev/fd/63" inode=14012 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:20.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:24:20.994000 audit[2319]: AVC avc: denied { write } for pid=2319 comm="tee" name="fd" dev="proc" ino=14020 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:20.994000 audit[2319]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff5987991 a2=241 a3=1b6 items=1 ppid=2278 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:20.994000 audit: CWD 
cwd="/etc/service/enabled/cni/log" Feb 12 19:24:20.994000 audit: PATH item=0 name="/dev/fd/63" inode=14013 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:20.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:24:20.985000 audit[2313]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffed05a990 a2=241 a3=1b6 items=1 ppid=2269 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:20.985000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 12 19:24:20.985000 audit: PATH item=0 name="/dev/fd/63" inode=14793 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:20.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:24:21.016000 audit[2326]: AVC avc: denied { write } for pid=2326 comm="tee" name="fd" dev="proc" ino=13274 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:21.016000 audit[2326]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc248f98f a2=241 a3=1b6 items=1 ppid=2268 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.016000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 12 19:24:21.016000 audit: PATH item=0 name="/dev/fd/63" inode=13266 dev=00:0b 
mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:21.016000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:24:21.028000 audit[2324]: AVC avc: denied { write } for pid=2324 comm="tee" name="fd" dev="proc" ino=13276 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:21.028000 audit[2324]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff269e97f a2=241 a3=1b6 items=1 ppid=2263 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.028000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 12 19:24:21.028000 audit: PATH item=0 name="/dev/fd/63" inode=13263 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:21.028000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:24:21.035000 audit[2332]: AVC avc: denied { write } for pid=2332 comm="tee" name="fd" dev="proc" ino=14806 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 12 19:24:21.035000 audit[2332]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd9f8798f a2=241 a3=1b6 items=1 ppid=2280 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.035000 audit: CWD 
cwd="/etc/service/enabled/bird6/log" Feb 12 19:24:21.035000 audit: PATH item=0 name="/dev/fd/63" inode=13269 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:21.035000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 12 19:24:21.137329 kernel: Initializing XFRM netlink socket Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit: BPF prog-id=10 op=LOAD Feb 12 19:24:21.263000 audit[2412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcfb66698 a2=70 a3=0 items=0 ppid=2264 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.263000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:24:21.263000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit: BPF prog-id=11 op=LOAD Feb 12 19:24:21.263000 audit[2412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcfb66698 a2=70 a3=4a174c items=0 ppid=2264 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.263000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:24:21.263000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 12 19:24:21.263000 audit[2412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffcfb666c8 a2=70 a3=f66079f items=0 ppid=2264 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.263000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { perfmon } for pid=2412 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit[2412]: AVC avc: denied { bpf } for pid=2412 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.263000 audit: BPF prog-id=12 op=LOAD Feb 12 19:24:21.263000 audit[2412]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcfb66618 a2=70 a3=f6607b9 items=0 ppid=2264 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.263000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 19:24:21.266000 audit[2414]: AVC avc: denied { bpf } for pid=2414 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.266000 audit[2414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff6f0fb68 a2=70 a3=0 items=0 ppid=2264 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.266000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 19:24:21.266000 audit[2414]: AVC avc: denied { bpf } for pid=2414 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 19:24:21.266000 audit[2414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffff6f0fa48 a2=70 a3=2 items=0 ppid=2264 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.266000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 19:24:21.278000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:24:21.327000 audit[2440]: NETFILTER_CFG table=mangle:79 family=2 entries=19 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:21.327000 audit[2440]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffe7fb6830 a2=0 a3=ffff8c224fa8 items=0 ppid=2264 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.327000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:21.330000 audit[2439]: NETFILTER_CFG table=raw:80 family=2 entries=19 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:21.330000 audit[2439]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=6132 a0=3 a1=ffffd21d24f0 a2=0 a3=ffffbe7bbfa8 items=0 ppid=2264 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.330000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:21.336000 audit[2445]: NETFILTER_CFG table=nat:81 family=2 entries=16 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:21.336000 audit[2445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffcef63020 a2=0 a3=ffff82fd6fa8 items=0 ppid=2264 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.336000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:21.337000 audit[2443]: NETFILTER_CFG table=filter:82 family=2 entries=39 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:21.337000 audit[2443]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18472 a0=3 a1=ffffdeb2f1f0 a2=0 a3=ffffbbd9cfa8 items=0 ppid=2264 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:21.337000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:21.802640 kubelet[1553]: E0212 
19:24:21.802592 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:22.149790 systemd-networkd[1096]: vxlan.calico: Link UP Feb 12 19:24:22.149799 systemd-networkd[1096]: vxlan.calico: Gained carrier Feb 12 19:24:22.803588 kubelet[1553]: E0212 19:24:22.803550 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:23.439432 systemd-networkd[1096]: vxlan.calico: Gained IPv6LL Feb 12 19:24:23.804386 kubelet[1553]: E0212 19:24:23.804326 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:24.804930 kubelet[1553]: E0212 19:24:24.804848 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:24.863306 update_engine[1200]: I0212 19:24:24.863216 1200 update_attempter.cc:509] Updating boot flags... 
Feb 12 19:24:25.805689 kubelet[1553]: E0212 19:24:25.805620 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:26.776660 kubelet[1553]: E0212 19:24:26.776592 1553 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:26.805796 kubelet[1553]: E0212 19:24:26.805740 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:27.806305 kubelet[1553]: E0212 19:24:27.806241 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:28.806657 kubelet[1553]: E0212 19:24:28.806608 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:29.807227 kubelet[1553]: E0212 19:24:29.807159 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:29.937192 env[1211]: time="2024-02-12T19:24:29.937135799Z" level=info msg="StopPodSandbox for \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\"" Feb 12 19:24:29.937708 env[1211]: time="2024-02-12T19:24:29.937680740Z" level=info msg="StopPodSandbox for \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\"" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.002 [INFO][2507] k8s.go 578: Cleaning up netns ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.003 [INFO][2507] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" iface="eth0" netns="/var/run/netns/cni-584b04fb-531c-89a3-994f-f9c2c26c66a0" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.003 [INFO][2507] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" iface="eth0" netns="/var/run/netns/cni-584b04fb-531c-89a3-994f-f9c2c26c66a0" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.003 [INFO][2507] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" iface="eth0" netns="/var/run/netns/cni-584b04fb-531c-89a3-994f-f9c2c26c66a0" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.003 [INFO][2507] k8s.go 585: Releasing IP address(es) ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.003 [INFO][2507] utils.go 188: Calico CNI releasing IP address ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.114 [INFO][2521] ipam_plugin.go 415: Releasing address using handleID ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.114 [INFO][2521] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.114 [INFO][2521] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.127 [WARNING][2521] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.127 [INFO][2521] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.129 [INFO][2521] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:30.134428 env[1211]: 2024-02-12 19:24:30.131 [INFO][2507] k8s.go 591: Teardown processing complete. ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:30.134580 systemd[1]: run-netns-cni\x2d584b04fb\x2d531c\x2d89a3\x2d994f\x2df9c2c26c66a0.mount: Deactivated successfully. Feb 12 19:24:30.135278 env[1211]: time="2024-02-12T19:24:30.135230461Z" level=info msg="TearDown network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\" successfully" Feb 12 19:24:30.135278 env[1211]: time="2024-02-12T19:24:30.135274235Z" level=info msg="StopPodSandbox for \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\" returns successfully" Feb 12 19:24:30.136058 env[1211]: time="2024-02-12T19:24:30.136008068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rklhx,Uid:e5010fa5-3c3f-473b-8a11-14f74264629a,Namespace:calico-system,Attempt:1,}" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.001 [INFO][2502] k8s.go 578: Cleaning up netns ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.002 [INFO][2502] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" iface="eth0" netns="/var/run/netns/cni-c73875d3-a020-3e79-7e7b-8b03c1ea19f0" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.002 [INFO][2502] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" iface="eth0" netns="/var/run/netns/cni-c73875d3-a020-3e79-7e7b-8b03c1ea19f0" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.002 [INFO][2502] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" iface="eth0" netns="/var/run/netns/cni-c73875d3-a020-3e79-7e7b-8b03c1ea19f0" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.002 [INFO][2502] k8s.go 585: Releasing IP address(es) ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.002 [INFO][2502] utils.go 188: Calico CNI releasing IP address ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.114 [INFO][2520] ipam_plugin.go 415: Releasing address using handleID ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.114 [INFO][2520] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.129 [INFO][2520] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.144 [WARNING][2520] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.144 [INFO][2520] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.147 [INFO][2520] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:30.150165 env[1211]: 2024-02-12 19:24:30.148 [INFO][2502] k8s.go 591: Teardown processing complete. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:30.156481 env[1211]: time="2024-02-12T19:24:30.150342536Z" level=info msg="TearDown network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\" successfully" Feb 12 19:24:30.156481 env[1211]: time="2024-02-12T19:24:30.150372345Z" level=info msg="StopPodSandbox for \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\" returns successfully" Feb 12 19:24:30.156481 env[1211]: time="2024-02-12T19:24:30.152511504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-nsmp7,Uid:5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b,Namespace:default,Attempt:1,}" Feb 12 19:24:30.151711 systemd[1]: run-netns-cni\x2dc73875d3\x2da020\x2d3e79\x2d7e7b\x2d8b03c1ea19f0.mount: Deactivated successfully. 
Feb 12 19:24:30.283314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:24:30.283417 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid0518dab8e5: link becomes ready Feb 12 19:24:30.281783 systemd-networkd[1096]: calid0518dab8e5: Link UP Feb 12 19:24:30.281945 systemd-networkd[1096]: calid0518dab8e5: Gained carrier Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.204 [INFO][2548] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0 nginx-deployment-8ffc5cf85- default 5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b 954 0 2024-02-12 19:24:16 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.89 nginx-deployment-8ffc5cf85-nsmp7 eth0 default [] [] [kns.default ksa.default.default] calid0518dab8e5 [] []}} ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Namespace="default" Pod="nginx-deployment-8ffc5cf85-nsmp7" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.204 [INFO][2548] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Namespace="default" Pod="nginx-deployment-8ffc5cf85-nsmp7" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.230 [INFO][2570] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" HandleID="k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.244 [INFO][2570] ipam_plugin.go 268: Auto assigning IP 
ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" HandleID="k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029dcb0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.89", "pod":"nginx-deployment-8ffc5cf85-nsmp7", "timestamp":"2024-02-12 19:24:30.230734803 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.244 [INFO][2570] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.244 [INFO][2570] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.245 [INFO][2570] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.248 [INFO][2570] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.253 [INFO][2570] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.260 [INFO][2570] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.262 [INFO][2570] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.265 [INFO][2570] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.265 [INFO][2570] ipam.go 
1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.266 [INFO][2570] ipam.go 1682: Creating new handle: k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202 Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.269 [INFO][2570] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.274 [INFO][2570] ipam.go 1216: Successfully claimed IPs: [192.168.98.1/26] block=192.168.98.0/26 handle="k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.274 [INFO][2570] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.1/26] handle="k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" host="10.0.0.89" Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.274 [INFO][2570] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:30.291390 env[1211]: 2024-02-12 19:24:30.274 [INFO][2570] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.1/26] IPv6=[] ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" HandleID="k8s-pod-network.421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.291932 env[1211]: 2024-02-12 19:24:30.276 [INFO][2548] k8s.go 385: Populated endpoint ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Namespace="default" Pod="nginx-deployment-8ffc5cf85-nsmp7" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-nsmp7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid0518dab8e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:30.291932 env[1211]: 2024-02-12 19:24:30.276 [INFO][2548] k8s.go 386: Calico CNI using IPs: [192.168.98.1/32] ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Namespace="default" Pod="nginx-deployment-8ffc5cf85-nsmp7" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.291932 env[1211]: 2024-02-12 19:24:30.276 [INFO][2548] dataplane_linux.go 68: Setting the host side veth name to calid0518dab8e5 ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Namespace="default" Pod="nginx-deployment-8ffc5cf85-nsmp7" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.291932 env[1211]: 2024-02-12 19:24:30.281 [INFO][2548] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Namespace="default" Pod="nginx-deployment-8ffc5cf85-nsmp7" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.291932 env[1211]: 2024-02-12 19:24:30.285 [INFO][2548] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Namespace="default" Pod="nginx-deployment-8ffc5cf85-nsmp7" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", 
"pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202", Pod:"nginx-deployment-8ffc5cf85-nsmp7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid0518dab8e5", MAC:"c6:dc:50:d1:45:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:30.291932 env[1211]: 2024-02-12 19:24:30.290 [INFO][2548] k8s.go 491: Wrote updated endpoint to datastore ContainerID="421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202" Namespace="default" Pod="nginx-deployment-8ffc5cf85-nsmp7" WorkloadEndpoint="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:30.305000 audit[2597]: NETFILTER_CFG table=filter:83 family=2 entries=36 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:30.307720 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 12 19:24:30.307779 kernel: audit: type=1325 audit(1707765870.305:260): table=filter:83 family=2 entries=36 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:30.305000 audit[2597]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19876 a0=3 a1=ffffeb411230 a2=0 a3=ffffb45f6fa8 items=0 ppid=2264 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
19:24:30.311528 env[1211]: time="2024-02-12T19:24:30.311327334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:30.311528 env[1211]: time="2024-02-12T19:24:30.311371628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:30.311528 env[1211]: time="2024-02-12T19:24:30.311381912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:30.312430 env[1211]: time="2024-02-12T19:24:30.311750108Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202 pid=2606 runtime=io.containerd.runc.v2 Feb 12 19:24:30.312574 kernel: audit: type=1300 audit(1707765870.305:260): arch=c00000b7 syscall=211 success=yes exit=19876 a0=3 a1=ffffeb411230 a2=0 a3=ffffb45f6fa8 items=0 ppid=2264 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:30.305000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:30.314387 kernel: audit: type=1327 audit(1707765870.305:260): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:30.336875 systemd-networkd[1096]: cali423b827255e: Link UP Feb 12 19:24:30.338120 systemd-networkd[1096]: cali423b827255e: Gained carrier Feb 12 19:24:30.338357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali423b827255e: link becomes ready Feb 12 19:24:30.346793 
systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.198 [INFO][2535] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-csi--node--driver--rklhx-eth0 csi-node-driver- calico-system e5010fa5-3c3f-473b-8a11-14f74264629a 955 0 2024-02-12 19:23:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.89 csi-node-driver-rklhx eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali423b827255e [] []}} ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Namespace="calico-system" Pod="csi-node-driver-rklhx" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--rklhx-" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.199 [INFO][2535] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Namespace="calico-system" Pod="csi-node-driver-rklhx" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.223 [INFO][2563] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" HandleID="k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.245 [INFO][2563] ipam_plugin.go 268: Auto assigning IP ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" HandleID="k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" 
Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002439b0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.89", "pod":"csi-node-driver-rklhx", "timestamp":"2024-02-12 19:24:30.223214617 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.245 [INFO][2563] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.274 [INFO][2563] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.274 [INFO][2563] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.276 [INFO][2563] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" host="10.0.0.89" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.288 [INFO][2563] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.302 [INFO][2563] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.304 [INFO][2563] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.313 [INFO][2563] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.313 [INFO][2563] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" host="10.0.0.89" Feb 12 
19:24:30.352598 env[1211]: 2024-02-12 19:24:30.315 [INFO][2563] ipam.go 1682: Creating new handle: k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859 Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.319 [INFO][2563] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" host="10.0.0.89" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.324 [INFO][2563] ipam.go 1216: Successfully claimed IPs: [192.168.98.2/26] block=192.168.98.0/26 handle="k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" host="10.0.0.89" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.324 [INFO][2563] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.2/26] handle="k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" host="10.0.0.89" Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.324 [INFO][2563] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:30.352598 env[1211]: 2024-02-12 19:24:30.324 [INFO][2563] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.2/26] IPv6=[] ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" HandleID="k8s-pod-network.3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.353223 env[1211]: 2024-02-12 19:24:30.327 [INFO][2535] k8s.go 385: Populated endpoint ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Namespace="calico-system" Pod="csi-node-driver-rklhx" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-csi--node--driver--rklhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5010fa5-3c3f-473b-8a11-14f74264629a", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"csi-node-driver-rklhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"cali423b827255e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:30.353223 env[1211]: 2024-02-12 19:24:30.327 [INFO][2535] k8s.go 386: Calico CNI using IPs: [192.168.98.2/32] ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Namespace="calico-system" Pod="csi-node-driver-rklhx" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.353223 env[1211]: 2024-02-12 19:24:30.327 [INFO][2535] dataplane_linux.go 68: Setting the host side veth name to cali423b827255e ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Namespace="calico-system" Pod="csi-node-driver-rklhx" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.353223 env[1211]: 2024-02-12 19:24:30.338 [INFO][2535] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Namespace="calico-system" Pod="csi-node-driver-rklhx" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.353223 env[1211]: 2024-02-12 19:24:30.343 [INFO][2535] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Namespace="calico-system" Pod="csi-node-driver-rklhx" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-csi--node--driver--rklhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5010fa5-3c3f-473b-8a11-14f74264629a", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", 
"controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859", Pod:"csi-node-driver-rklhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali423b827255e", MAC:"82:6d:a1:e5:13:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:30.353223 env[1211]: 2024-02-12 19:24:30.351 [INFO][2535] k8s.go 491: Wrote updated endpoint to datastore ContainerID="3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859" Namespace="calico-system" Pod="csi-node-driver-rklhx" WorkloadEndpoint="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:30.366484 env[1211]: time="2024-02-12T19:24:30.366279050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-nsmp7,Uid:5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b,Namespace:default,Attempt:1,} returns sandbox id \"421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202\"" Feb 12 19:24:30.367936 env[1211]: time="2024-02-12T19:24:30.367907166Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:24:30.370000 audit[2657]: NETFILTER_CFG table=filter:84 family=2 entries=40 op=nft_register_chain pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:30.370000 audit[2657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21096 a0=3 
a1=fffff0e2b280 a2=0 a3=ffffa5234fa8 items=0 ppid=2264 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:30.374634 env[1211]: time="2024-02-12T19:24:30.374554876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:30.374634 env[1211]: time="2024-02-12T19:24:30.374593688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:30.374634 env[1211]: time="2024-02-12T19:24:30.374605972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:30.374858 kernel: audit: type=1325 audit(1707765870.370:261): table=filter:84 family=2 entries=40 op=nft_register_chain pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:30.374914 kernel: audit: type=1300 audit(1707765870.370:261): arch=c00000b7 syscall=211 success=yes exit=21096 a0=3 a1=fffff0e2b280 a2=0 a3=ffffa5234fa8 items=0 ppid=2264 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:30.374937 kernel: audit: type=1327 audit(1707765870.370:261): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:30.370000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:30.375113 env[1211]: time="2024-02-12T19:24:30.375071239Z" level=info msg="starting signal loop" 
namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859 pid=2660 runtime=io.containerd.runc.v2 Feb 12 19:24:30.445483 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:24:30.459977 env[1211]: time="2024-02-12T19:24:30.459929964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rklhx,Uid:e5010fa5-3c3f-473b-8a11-14f74264629a,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859\"" Feb 12 19:24:30.805444 kubelet[1553]: I0212 19:24:30.805400 1553 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:24:30.806196 kubelet[1553]: E0212 19:24:30.806167 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:30.807769 kubelet[1553]: E0212 19:24:30.807736 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:31.028251 kubelet[1553]: E0212 19:24:31.028221 1553 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 19:24:31.503498 systemd-networkd[1096]: calid0518dab8e5: Gained IPv6LL Feb 12 19:24:31.808636 kubelet[1553]: E0212 19:24:31.808593 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:32.079470 systemd-networkd[1096]: cali423b827255e: Gained IPv6LL Feb 12 19:24:32.688225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964674417.mount: Deactivated successfully. 
Feb 12 19:24:32.809590 kubelet[1553]: E0212 19:24:32.809539 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:33.722864 env[1211]: time="2024-02-12T19:24:33.722782593Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:33.724302 env[1211]: time="2024-02-12T19:24:33.724255602Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:33.726217 env[1211]: time="2024-02-12T19:24:33.726185457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:33.728005 env[1211]: time="2024-02-12T19:24:33.727970552Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:33.729671 env[1211]: time="2024-02-12T19:24:33.729616289Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 19:24:33.730440 env[1211]: time="2024-02-12T19:24:33.730412350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 12 19:24:33.731314 env[1211]: time="2024-02-12T19:24:33.731228336Z" level=info msg="CreateContainer within sandbox \"421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 19:24:33.745182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255898970.mount: Deactivated successfully. 
Feb 12 19:24:33.748311 env[1211]: time="2024-02-12T19:24:33.748249337Z" level=info msg="CreateContainer within sandbox \"421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"dcaa00cb545408521d8719b1ac88b2f271c674cd6d988a15b27230fe6aca9e91\"" Feb 12 19:24:33.749345 env[1211]: time="2024-02-12T19:24:33.749249015Z" level=info msg="StartContainer for \"dcaa00cb545408521d8719b1ac88b2f271c674cd6d988a15b27230fe6aca9e91\"" Feb 12 19:24:33.815099 kubelet[1553]: E0212 19:24:33.810005 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:33.815477 env[1211]: time="2024-02-12T19:24:33.811931202Z" level=info msg="StartContainer for \"dcaa00cb545408521d8719b1ac88b2f271c674cd6d988a15b27230fe6aca9e91\" returns successfully" Feb 12 19:24:34.044572 kubelet[1553]: I0212 19:24:34.044470 1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-nsmp7" podStartSLOduration=-9.223372018810347e+09 pod.CreationTimestamp="2024-02-12 19:24:16 +0000 UTC" firstStartedPulling="2024-02-12 19:24:30.367409128 +0000 UTC m=+44.611836827" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:34.043093729 +0000 UTC m=+48.287521428" watchObservedRunningTime="2024-02-12 19:24:34.044428803 +0000 UTC m=+48.288856502" Feb 12 19:24:34.810210 kubelet[1553]: E0212 19:24:34.810162 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:34.933726 env[1211]: time="2024-02-12T19:24:34.933685652Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:34.936516 env[1211]: time="2024-02-12T19:24:34.936480475Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:34.937718 env[1211]: time="2024-02-12T19:24:34.937690836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:34.939938 env[1211]: time="2024-02-12T19:24:34.939895542Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:34.940393 env[1211]: time="2024-02-12T19:24:34.940365267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 12 19:24:34.942287 env[1211]: time="2024-02-12T19:24:34.942254409Z" level=info msg="CreateContainer within sandbox \"3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 12 19:24:34.960667 env[1211]: time="2024-02-12T19:24:34.960616208Z" level=info msg="CreateContainer within sandbox \"3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9571c897ad3703827d65d047017685db3665154408c99b7058ba572eeadd721a\"" Feb 12 19:24:34.961335 env[1211]: time="2024-02-12T19:24:34.961303310Z" level=info msg="StartContainer for \"9571c897ad3703827d65d047017685db3665154408c99b7058ba572eeadd721a\"" Feb 12 19:24:35.015795 env[1211]: time="2024-02-12T19:24:35.015746819Z" level=info msg="StartContainer for \"9571c897ad3703827d65d047017685db3665154408c99b7058ba572eeadd721a\" returns successfully" Feb 12 19:24:35.017033 env[1211]: time="2024-02-12T19:24:35.017007980Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 12 19:24:35.810786 kubelet[1553]: E0212 19:24:35.810744 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:36.369779 env[1211]: time="2024-02-12T19:24:36.369728603Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:36.375222 env[1211]: time="2024-02-12T19:24:36.375175375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:36.376677 env[1211]: time="2024-02-12T19:24:36.376642894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:36.377694 env[1211]: time="2024-02-12T19:24:36.377651221Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:36.378081 env[1211]: time="2024-02-12T19:24:36.378039435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 12 19:24:36.379925 env[1211]: time="2024-02-12T19:24:36.379894329Z" level=info msg="CreateContainer within sandbox \"3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 12 19:24:36.401317 env[1211]: time="2024-02-12T19:24:36.401245470Z" level=info 
msg="CreateContainer within sandbox \"3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1d3da9280b6d0792f0e22bad9e91ad22f74ad0ce2dd4c859bcf2e238430a4406\"" Feb 12 19:24:36.402208 env[1211]: time="2024-02-12T19:24:36.402168295Z" level=info msg="StartContainer for \"1d3da9280b6d0792f0e22bad9e91ad22f74ad0ce2dd4c859bcf2e238430a4406\"" Feb 12 19:24:36.487518 env[1211]: time="2024-02-12T19:24:36.487042768Z" level=info msg="StartContainer for \"1d3da9280b6d0792f0e22bad9e91ad22f74ad0ce2dd4c859bcf2e238430a4406\" returns successfully" Feb 12 19:24:36.557364 kubelet[1553]: I0212 19:24:36.556102 1553 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:36.611000 audit[2901]: NETFILTER_CFG table=filter:85 family=2 entries=7 op=nft_register_rule pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:36.611000 audit[2901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffd62b8080 a2=0 a3=ffff85e066c0 items=0 ppid=1866 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:36.614975 kubelet[1553]: I0212 19:24:36.614887 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99e0da3c-7214-4345-bffb-4bce9f7e4b46-calico-apiserver-certs\") pod \"calico-apiserver-f9f7fb78c-hmprn\" (UID: \"99e0da3c-7214-4345-bffb-4bce9f7e4b46\") " pod="calico-apiserver/calico-apiserver-f9f7fb78c-hmprn" Feb 12 19:24:36.614975 kubelet[1553]: I0212 19:24:36.614939 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbwhl\" (UniqueName: 
\"kubernetes.io/projected/99e0da3c-7214-4345-bffb-4bce9f7e4b46-kube-api-access-tbwhl\") pod \"calico-apiserver-f9f7fb78c-hmprn\" (UID: \"99e0da3c-7214-4345-bffb-4bce9f7e4b46\") " pod="calico-apiserver/calico-apiserver-f9f7fb78c-hmprn" Feb 12 19:24:36.618767 kernel: audit: type=1325 audit(1707765876.611:262): table=filter:85 family=2 entries=7 op=nft_register_rule pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:36.618893 kernel: audit: type=1300 audit(1707765876.611:262): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffd62b8080 a2=0 a3=ffff85e066c0 items=0 ppid=1866 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:36.618929 kernel: audit: type=1327 audit(1707765876.611:262): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:36.611000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:36.633000 audit[2901]: NETFILTER_CFG table=nat:86 family=2 entries=78 op=nft_register_rule pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:36.633000 audit[2901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd62b8080 a2=0 a3=ffff85e066c0 items=0 ppid=1866 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:36.641044 kernel: audit: type=1325 audit(1707765876.633:263): table=nat:86 family=2 entries=78 op=nft_register_rule pid=2901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:36.641111 kernel: audit: type=1300 audit(1707765876.633:263): arch=c00000b7 
syscall=211 success=yes exit=24988 a0=3 a1=ffffd62b8080 a2=0 a3=ffff85e066c0 items=0 ppid=1866 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:36.641133 kernel: audit: type=1327 audit(1707765876.633:263): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:36.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:36.683000 audit[2930]: NETFILTER_CFG table=filter:87 family=2 entries=8 op=nft_register_rule pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:36.683000 audit[2930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe8a6bd20 a2=0 a3=ffff7fd0e6c0 items=0 ppid=1866 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:36.688866 kernel: audit: type=1325 audit(1707765876.683:264): table=filter:87 family=2 entries=8 op=nft_register_rule pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:36.688930 kernel: audit: type=1300 audit(1707765876.683:264): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe8a6bd20 a2=0 a3=ffff7fd0e6c0 items=0 ppid=1866 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:36.688951 kernel: audit: type=1327 audit(1707765876.683:264): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:36.683000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:36.684000 audit[2930]: NETFILTER_CFG table=nat:88 family=2 entries=78 op=nft_register_rule pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:36.684000 audit[2930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe8a6bd20 a2=0 a3=ffff7fd0e6c0 items=0 ppid=1866 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:36.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:36.697317 kernel: audit: type=1325 audit(1707765876.684:265): table=nat:88 family=2 entries=78 op=nft_register_rule pid=2930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:36.716287 kubelet[1553]: E0212 19:24:36.716249 1553 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 12 19:24:36.716464 kubelet[1553]: E0212 19:24:36.716434 1553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99e0da3c-7214-4345-bffb-4bce9f7e4b46-calico-apiserver-certs podName:99e0da3c-7214-4345-bffb-4bce9f7e4b46 nodeName:}" failed. No retries permitted until 2024-02-12 19:24:37.216410811 +0000 UTC m=+51.460838510 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/99e0da3c-7214-4345-bffb-4bce9f7e4b46-calico-apiserver-certs") pod "calico-apiserver-f9f7fb78c-hmprn" (UID: "99e0da3c-7214-4345-bffb-4bce9f7e4b46") : secret "calico-apiserver-certs" not found Feb 12 19:24:36.811928 kubelet[1553]: E0212 19:24:36.811879 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:36.863806 kubelet[1553]: I0212 19:24:36.863547 1553 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 12 19:24:36.863806 kubelet[1553]: I0212 19:24:36.863576 1553 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 12 19:24:37.053754 kubelet[1553]: I0212 19:24:37.053719 1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-rklhx" podStartSLOduration=-9.223371998801094e+09 pod.CreationTimestamp="2024-02-12 19:23:59 +0000 UTC" firstStartedPulling="2024-02-12 19:24:30.460927961 +0000 UTC m=+44.705355660" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:37.052368426 +0000 UTC m=+51.296796125" watchObservedRunningTime="2024-02-12 19:24:37.053682295 +0000 UTC m=+51.298109954" Feb 12 19:24:37.460827 env[1211]: time="2024-02-12T19:24:37.460726826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9f7fb78c-hmprn,Uid:99e0da3c-7214-4345-bffb-4bce9f7e4b46,Namespace:calico-apiserver,Attempt:0,}" Feb 12 19:24:37.688672 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:24:37.688812 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec25b40d6b: link becomes ready Feb 12 19:24:37.687623 systemd-networkd[1096]: cali5ec25b40d6b: Link UP Feb 12 
19:24:37.688740 systemd-networkd[1096]: cali5ec25b40d6b: Gained carrier Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.613 [INFO][2934] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0 calico-apiserver-f9f7fb78c- calico-apiserver 99e0da3c-7214-4345-bffb-4bce9f7e4b46 1036 0 2024-02-12 19:24:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f9f7fb78c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.89 calico-apiserver-f9f7fb78c-hmprn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ec25b40d6b [] []}} ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-hmprn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.614 [INFO][2934] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-hmprn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.642 [INFO][2948] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" HandleID="k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Workload="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.654 [INFO][2948] ipam_plugin.go 268: Auto assigning IP ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" 
HandleID="k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Workload="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029dc50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.89", "pod":"calico-apiserver-f9f7fb78c-hmprn", "timestamp":"2024-02-12 19:24:37.642142999 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.654 [INFO][2948] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.654 [INFO][2948] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.654 [INFO][2948] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.656 [INFO][2948] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.662 [INFO][2948] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.667 [INFO][2948] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.669 [INFO][2948] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.672 [INFO][2948] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.672 [INFO][2948] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 
handle="k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.673 [INFO][2948] ipam.go 1682: Creating new handle: k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371 Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.678 [INFO][2948] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.684 [INFO][2948] ipam.go 1216: Successfully claimed IPs: [192.168.98.3/26] block=192.168.98.0/26 handle="k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.684 [INFO][2948] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.3/26] handle="k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" host="10.0.0.89" Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.684 [INFO][2948] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:37.699483 env[1211]: 2024-02-12 19:24:37.684 [INFO][2948] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.3/26] IPv6=[] ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" HandleID="k8s-pod-network.4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Workload="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" Feb 12 19:24:37.701189 env[1211]: 2024-02-12 19:24:37.686 [INFO][2934] k8s.go 385: Populated endpoint ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-hmprn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0", GenerateName:"calico-apiserver-f9f7fb78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"99e0da3c-7214-4345-bffb-4bce9f7e4b46", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9f7fb78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"calico-apiserver-f9f7fb78c-hmprn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ec25b40d6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:37.701189 env[1211]: 2024-02-12 19:24:37.686 [INFO][2934] k8s.go 386: Calico CNI using IPs: [192.168.98.3/32] ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-hmprn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" Feb 12 19:24:37.701189 env[1211]: 2024-02-12 19:24:37.686 [INFO][2934] dataplane_linux.go 68: Setting the host side veth name to cali5ec25b40d6b ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-hmprn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" Feb 12 19:24:37.701189 env[1211]: 2024-02-12 19:24:37.688 [INFO][2934] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-hmprn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" Feb 12 19:24:37.701189 env[1211]: 2024-02-12 19:24:37.690 [INFO][2934] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-hmprn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0", GenerateName:"calico-apiserver-f9f7fb78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"99e0da3c-7214-4345-bffb-4bce9f7e4b46", ResourceVersion:"1036", Generation:0, 
CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9f7fb78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371", Pod:"calico-apiserver-f9f7fb78c-hmprn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ec25b40d6b", MAC:"da:00:a7:56:51:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:37.701189 env[1211]: 2024-02-12 19:24:37.698 [INFO][2934] k8s.go 491: Wrote updated endpoint to datastore ContainerID="4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371" Namespace="calico-apiserver" Pod="calico-apiserver-f9f7fb78c-hmprn" WorkloadEndpoint="10.0.0.89-k8s-calico--apiserver--f9f7fb78c--hmprn-eth0" Feb 12 19:24:37.718000 audit[2971]: NETFILTER_CFG table=filter:89 family=2 entries=51 op=nft_register_chain pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:37.718000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26916 a0=3 a1=ffffe921d360 a2=0 a3=ffff8b6a7fa8 items=0 ppid=2264 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:37.718000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:37.726497 env[1211]: time="2024-02-12T19:24:37.726284323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:37.726497 env[1211]: time="2024-02-12T19:24:37.726344137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:37.726497 env[1211]: time="2024-02-12T19:24:37.726361341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:37.726805 env[1211]: time="2024-02-12T19:24:37.726743751Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371 pid=2979 runtime=io.containerd.runc.v2 Feb 12 19:24:37.786417 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:24:37.804092 env[1211]: time="2024-02-12T19:24:37.803801331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9f7fb78c-hmprn,Uid:99e0da3c-7214-4345-bffb-4bce9f7e4b46,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371\"" Feb 12 19:24:37.805650 env[1211]: time="2024-02-12T19:24:37.805618718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 12 19:24:37.812543 kubelet[1553]: E0212 19:24:37.812482 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:38.813253 kubelet[1553]: E0212 19:24:38.813208 
1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:38.927518 systemd-networkd[1096]: cali5ec25b40d6b: Gained IPv6LL Feb 12 19:24:39.256000 audit[3038]: NETFILTER_CFG table=filter:90 family=2 entries=20 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:39.256000 audit[3038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffca118f60 a2=0 a3=ffff82d296c0 items=0 ppid=1866 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:39.257000 audit[3038]: NETFILTER_CFG table=nat:91 family=2 entries=78 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:39.257000 audit[3038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffca118f60 a2=0 a3=ffff82d296c0 items=0 ppid=1866 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:39.279813 kubelet[1553]: I0212 19:24:39.278996 1553 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:24:39.304000 audit[3064]: NETFILTER_CFG table=filter:92 family=2 entries=32 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:39.304000 audit[3064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffd5c285f0 a2=0 
a3=ffffa7ac26c0 items=0 ppid=1866 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:39.306000 audit[3064]: NETFILTER_CFG table=nat:93 family=2 entries=78 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:39.306000 audit[3064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd5c285f0 a2=0 a3=ffffa7ac26c0 items=0 ppid=1866 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:39.328999 kubelet[1553]: I0212 19:24:39.328955 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1dfe7d07-0454-4690-835d-ee6300216233-data\") pod \"nfs-server-provisioner-0\" (UID: \"1dfe7d07-0454-4690-835d-ee6300216233\") " pod="default/nfs-server-provisioner-0" Feb 12 19:24:39.328999 kubelet[1553]: I0212 19:24:39.329008 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tn7\" (UniqueName: \"kubernetes.io/projected/1dfe7d07-0454-4690-835d-ee6300216233-kube-api-access-j9tn7\") pod \"nfs-server-provisioner-0\" (UID: \"1dfe7d07-0454-4690-835d-ee6300216233\") " pod="default/nfs-server-provisioner-0" Feb 12 19:24:39.585337 env[1211]: time="2024-02-12T19:24:39.585075213Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1dfe7d07-0454-4690-835d-ee6300216233,Namespace:default,Attempt:0,}" Feb 12 19:24:39.814057 kubelet[1553]: E0212 19:24:39.813953 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:39.862969 systemd-networkd[1096]: cali60e51b789ff: Link UP Feb 12 19:24:39.863351 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:24:39.863401 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 12 19:24:39.863638 systemd-networkd[1096]: cali60e51b789ff: Gained carrier Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.761 [INFO][3068] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 1dfe7d07-0454-4690-835d-ee6300216233 1078 0 2024-02-12 19:24:39 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.89 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.761 
[INFO][3068] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.800 [INFO][3083] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" HandleID="k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Workload="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.815 [INFO][3083] ipam_plugin.go 268: Auto assigning IP ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" HandleID="k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Workload="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c380), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.89", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-12 19:24:39.800438797 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.815 [INFO][3083] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.815 [INFO][3083] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.815 [INFO][3083] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.817 [INFO][3083] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.823 [INFO][3083] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.833 [INFO][3083] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.835 [INFO][3083] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.837 [INFO][3083] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.837 [INFO][3083] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.840 [INFO][3083] ipam.go 1682: Creating new handle: k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9 Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.844 [INFO][3083] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.858 [INFO][3083] ipam.go 1216: Successfully claimed IPs: [192.168.98.4/26] block=192.168.98.0/26 handle="k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.858 [INFO][3083] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.4/26] handle="k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" host="10.0.0.89" Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.858 [INFO][3083] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:39.879865 env[1211]: 2024-02-12 19:24:39.858 [INFO][3083] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.4/26] IPv6=[] ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" HandleID="k8s-pod-network.b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Workload="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:24:39.881348 env[1211]: 2024-02-12 19:24:39.859 [INFO][3068] k8s.go 385: Populated endpoint ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"1dfe7d07-0454-4690-835d-ee6300216233", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.98.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:39.881348 env[1211]: 2024-02-12 19:24:39.860 [INFO][3068] k8s.go 386: Calico CNI using IPs: [192.168.98.4/32] ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:24:39.881348 env[1211]: 2024-02-12 19:24:39.860 [INFO][3068] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:24:39.881348 env[1211]: 2024-02-12 19:24:39.864 [INFO][3068] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:24:39.881555 env[1211]: 2024-02-12 19:24:39.864 [INFO][3068] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"1dfe7d07-0454-4690-835d-ee6300216233", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 39, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.98.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"76:81:9d:c2:71:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:39.881555 env[1211]: 2024-02-12 19:24:39.873 [INFO][3068] k8s.go 491: Wrote updated endpoint to datastore ContainerID="b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.89-k8s-nfs--server--provisioner--0-eth0" Feb 12 19:24:39.892000 audit[3109]: NETFILTER_CFG table=filter:94 family=2 entries=42 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:39.892000 audit[3109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20688 a0=3 a1=ffffc3e62880 a2=0 a3=ffff8f422fa8 items=0 ppid=2264 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:39.892000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 
19:24:39.913970 env[1211]: time="2024-02-12T19:24:39.913868299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:39.913970 env[1211]: time="2024-02-12T19:24:39.913923631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:39.913970 env[1211]: time="2024-02-12T19:24:39.913935393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:39.914513 env[1211]: time="2024-02-12T19:24:39.914436062Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9 pid=3118 runtime=io.containerd.runc.v2 Feb 12 19:24:39.972310 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:24:39.993887 env[1211]: time="2024-02-12T19:24:39.993838326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1dfe7d07-0454-4690-835d-ee6300216233,Namespace:default,Attempt:0,} returns sandbox id \"b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9\"" Feb 12 19:24:40.569205 env[1211]: time="2024-02-12T19:24:40.569152908Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:40.570625 env[1211]: time="2024-02-12T19:24:40.570584248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:40.573022 env[1211]: time="2024-02-12T19:24:40.572974829Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:40.574174 env[1211]: time="2024-02-12T19:24:40.574134312Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:40.575520 env[1211]: time="2024-02-12T19:24:40.575480554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 12 19:24:40.576087 env[1211]: time="2024-02-12T19:24:40.576058395Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 19:24:40.580057 env[1211]: time="2024-02-12T19:24:40.580004221Z" level=info msg="CreateContainer within sandbox \"4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 12 19:24:40.602000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2047554204.mount: Deactivated successfully. 
Feb 12 19:24:40.607375 env[1211]: time="2024-02-12T19:24:40.607323585Z" level=info msg="CreateContainer within sandbox \"4a443a5636f32bc28958b86e0ccd707732642c4e0526701ddf40b6ea55a57371\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3cec0c1c0ae3337b1264289afca1209d87af1627045ef75e61b23d3ca90d45ed\"" Feb 12 19:24:40.608052 env[1211]: time="2024-02-12T19:24:40.608016050Z" level=info msg="StartContainer for \"3cec0c1c0ae3337b1264289afca1209d87af1627045ef75e61b23d3ca90d45ed\"" Feb 12 19:24:40.703570 env[1211]: time="2024-02-12T19:24:40.703499493Z" level=info msg="StartContainer for \"3cec0c1c0ae3337b1264289afca1209d87af1627045ef75e61b23d3ca90d45ed\" returns successfully" Feb 12 19:24:40.814303 kubelet[1553]: E0212 19:24:40.814244 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:41.061408 kubelet[1553]: I0212 19:24:41.061349 1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f9f7fb78c-hmprn" podStartSLOduration=-9.223372031793491e+09 pod.CreationTimestamp="2024-02-12 19:24:36 +0000 UTC" firstStartedPulling="2024-02-12 19:24:37.804941159 +0000 UTC m=+52.049368858" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:41.060783779 +0000 UTC m=+55.305211478" watchObservedRunningTime="2024-02-12 19:24:41.06128388 +0000 UTC m=+55.305711579" Feb 12 19:24:41.108000 audit[3211]: NETFILTER_CFG table=filter:95 family=2 entries=32 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:41.108000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=fffffb197b40 a2=0 a3=ffffabe816c0 items=0 ppid=1866 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 12 19:24:41.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:41.110000 audit[3211]: NETFILTER_CFG table=nat:96 family=2 entries=78 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:41.110000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffffb197b40 a2=0 a3=ffffabe816c0 items=0 ppid=1866 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:41.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:41.359448 systemd-networkd[1096]: cali60e51b789ff: Gained IPv6LL Feb 12 19:24:41.404000 audit[3243]: NETFILTER_CFG table=filter:97 family=2 entries=32 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:41.404000 audit[3243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffe7d79c40 a2=0 a3=ffffb47976c0 items=0 ppid=1866 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:41.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:41.406000 audit[3243]: NETFILTER_CFG table=nat:98 family=2 entries=78 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:41.406000 audit[3243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe7d79c40 a2=0 a3=ffffb47976c0 items=0 ppid=1866 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:41.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:41.814684 kubelet[1553]: E0212 19:24:41.814648 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:42.666937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865941538.mount: Deactivated successfully. Feb 12 19:24:42.816016 kubelet[1553]: E0212 19:24:42.815967 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:43.816612 kubelet[1553]: E0212 19:24:43.816559 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:44.365526 env[1211]: time="2024-02-12T19:24:44.365470188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:44.393935 env[1211]: time="2024-02-12T19:24:44.393879331Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:44.458409 env[1211]: time="2024-02-12T19:24:44.458362975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:44.587792 env[1211]: time="2024-02-12T19:24:44.587735298Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:44.588576 env[1211]: time="2024-02-12T19:24:44.588542205Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 12 19:24:44.590550 env[1211]: time="2024-02-12T19:24:44.590517885Z" level=info msg="CreateContainer within sandbox \"b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 19:24:44.612323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3766940331.mount: Deactivated successfully. Feb 12 19:24:44.621867 env[1211]: time="2024-02-12T19:24:44.621534264Z" level=info msg="CreateContainer within sandbox \"b291a7efebed8d48fce8cda2d85eaad95ac474a06561544bcc88e84b22a5b6e9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1d2c801815ca99db307c3f157a08ea46f6ec4f1bd8f4ca790d0c7ec2e52dc0c7\"" Feb 12 19:24:44.622105 env[1211]: time="2024-02-12T19:24:44.622067201Z" level=info msg="StartContainer for \"1d2c801815ca99db307c3f157a08ea46f6ec4f1bd8f4ca790d0c7ec2e52dc0c7\"" Feb 12 19:24:44.697331 env[1211]: time="2024-02-12T19:24:44.694179317Z" level=info msg="StartContainer for \"1d2c801815ca99db307c3f157a08ea46f6ec4f1bd8f4ca790d0c7ec2e52dc0c7\" returns successfully" Feb 12 19:24:44.816909 kubelet[1553]: E0212 19:24:44.816872 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:45.074833 kubelet[1553]: I0212 19:24:45.074727 1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372030780088e+09 
pod.CreationTimestamp="2024-02-12 19:24:39 +0000 UTC" firstStartedPulling="2024-02-12 19:24:39.995371459 +0000 UTC m=+54.239799158" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:45.074552455 +0000 UTC m=+59.318980154" watchObservedRunningTime="2024-02-12 19:24:45.074687158 +0000 UTC m=+59.319114858" Feb 12 19:24:45.130466 kernel: kauditd_printk_skb: 32 callbacks suppressed Feb 12 19:24:45.130590 kernel: audit: type=1325 audit(1707765885.128:276): table=filter:99 family=2 entries=20 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:45.128000 audit[3334]: NETFILTER_CFG table=filter:99 family=2 entries=20 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:45.128000 audit[3334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc4f0ec20 a2=0 a3=ffffb46b26c0 items=0 ppid=1866 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.134453 kernel: audit: type=1300 audit(1707765885.128:276): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc4f0ec20 a2=0 a3=ffffb46b26c0 items=0 ppid=1866 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:45.135859 kernel: audit: type=1327 audit(1707765885.128:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:45.131000 audit[3334]: NETFILTER_CFG table=nat:100 family=2 entries=162 op=nft_register_chain 
pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:45.137377 kernel: audit: type=1325 audit(1707765885.131:277): table=nat:100 family=2 entries=162 op=nft_register_chain pid=3334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 19:24:45.137414 kernel: audit: type=1300 audit(1707765885.131:277): arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffc4f0ec20 a2=0 a3=ffffb46b26c0 items=0 ppid=1866 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.131000 audit[3334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffc4f0ec20 a2=0 a3=ffffb46b26c0 items=0 ppid=1866 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:45.140777 kernel: audit: type=1327 audit(1707765885.131:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:45.131000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 19:24:45.817789 kubelet[1553]: E0212 19:24:45.817742 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:46.776398 kubelet[1553]: E0212 19:24:46.776342 1553 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:46.783561 env[1211]: time="2024-02-12T19:24:46.783520261Z" level=info msg="StopPodSandbox for \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\"" Feb 12 19:24:46.818210 kubelet[1553]: E0212 19:24:46.818127 1553 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.826 [WARNING][3351] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-csi--node--driver--rklhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5010fa5-3c3f-473b-8a11-14f74264629a", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859", Pod:"csi-node-driver-rklhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali423b827255e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.826 [INFO][3351] k8s.go 578: Cleaning up netns 
ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.826 [INFO][3351] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" iface="eth0" netns="" Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.827 [INFO][3351] k8s.go 585: Releasing IP address(es) ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.827 [INFO][3351] utils.go 188: Calico CNI releasing IP address ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.842 [INFO][3359] ipam_plugin.go 415: Releasing address using handleID ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.842 [INFO][3359] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.842 [INFO][3359] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.853 [WARNING][3359] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.853 [INFO][3359] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.854 [INFO][3359] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:46.856958 env[1211]: 2024-02-12 19:24:46.855 [INFO][3351] k8s.go 591: Teardown processing complete. ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:46.857419 env[1211]: time="2024-02-12T19:24:46.856981086Z" level=info msg="TearDown network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\" successfully" Feb 12 19:24:46.857419 env[1211]: time="2024-02-12T19:24:46.857008770Z" level=info msg="StopPodSandbox for \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\" returns successfully" Feb 12 19:24:46.857469 env[1211]: time="2024-02-12T19:24:46.857426362Z" level=info msg="RemovePodSandbox for \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\"" Feb 12 19:24:46.857493 env[1211]: time="2024-02-12T19:24:46.857457087Z" level=info msg="Forcibly stopping sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\"" Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.898 [WARNING][3382] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-csi--node--driver--rklhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5010fa5-3c3f-473b-8a11-14f74264629a", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 23, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"3d3e0ef7c3567ae8d17be4ac07d953095bcbbd405304736f8d36a7db70cf0859", Pod:"csi-node-driver-rklhx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali423b827255e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.898 [INFO][3382] k8s.go 578: Cleaning up netns ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.898 [INFO][3382] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" iface="eth0" netns="" Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.898 [INFO][3382] k8s.go 585: Releasing IP address(es) ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.898 [INFO][3382] utils.go 188: Calico CNI releasing IP address ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.916 [INFO][3390] ipam_plugin.go 415: Releasing address using handleID ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.916 [INFO][3390] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.916 [INFO][3390] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.929 [WARNING][3390] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.929 [INFO][3390] ipam_plugin.go 443: Releasing address using workloadID ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" HandleID="k8s-pod-network.6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Workload="10.0.0.89-k8s-csi--node--driver--rklhx-eth0" Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.930 [INFO][3390] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 19:24:46.933032 env[1211]: 2024-02-12 19:24:46.932 [INFO][3382] k8s.go 591: Teardown processing complete. ContainerID="6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903" Feb 12 19:24:46.933549 env[1211]: time="2024-02-12T19:24:46.933515117Z" level=info msg="TearDown network for sandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\" successfully" Feb 12 19:24:46.936391 env[1211]: time="2024-02-12T19:24:46.936354003Z" level=info msg="RemovePodSandbox \"6908db5bbf1e5c5bbb191ab083c5fee3e4f48d19a253b9db1cbe2258478d9903\" returns successfully" Feb 12 19:24:46.937920 env[1211]: time="2024-02-12T19:24:46.937883505Z" level=info msg="StopPodSandbox for \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\"" Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.969 [WARNING][3415] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", 
ContainerID:"421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202", Pod:"nginx-deployment-8ffc5cf85-nsmp7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid0518dab8e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.970 [INFO][3415] k8s.go 578: Cleaning up netns ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.970 [INFO][3415] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" iface="eth0" netns="" Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.970 [INFO][3415] k8s.go 585: Releasing IP address(es) ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.970 [INFO][3415] utils.go 188: Calico CNI releasing IP address ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.986 [INFO][3423] ipam_plugin.go 415: Releasing address using handleID ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0" Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.987 [INFO][3423] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.987 [INFO][3423] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.996 [WARNING][3423] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0"
Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.996 [INFO][3423] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0"
Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.998 [INFO][3423] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 12 19:24:47.000062 env[1211]: 2024-02-12 19:24:46.999 [INFO][3415] k8s.go 591: Teardown processing complete. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1"
Feb 12 19:24:47.000530 env[1211]: time="2024-02-12T19:24:47.000091563Z" level=info msg="TearDown network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\" successfully"
Feb 12 19:24:47.000530 env[1211]: time="2024-02-12T19:24:47.000123488Z" level=info msg="StopPodSandbox for \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\" returns successfully"
Feb 12 19:24:47.000619 env[1211]: time="2024-02-12T19:24:47.000568724Z" level=info msg="RemovePodSandbox for \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\""
Feb 12 19:24:47.000654 env[1211]: time="2024-02-12T19:24:47.000616613Z" level=info msg="Forcibly stopping sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\""
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.033 [WARNING][3445] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"5dabd81d-ad01-4f77-aeaa-b3a28fe0fd5b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"421dcd80865685c1c1f392291659b85bbda3f6c6b3173794b8284aaa3b9cf202", Pod:"nginx-deployment-8ffc5cf85-nsmp7", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calid0518dab8e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.033 [INFO][3445] k8s.go 578: Cleaning up netns ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1"
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.033 [INFO][3445] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" iface="eth0" netns=""
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.033 [INFO][3445] k8s.go 585: Releasing IP address(es) ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1"
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.033 [INFO][3445] utils.go 188: Calico CNI releasing IP address ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1"
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.050 [INFO][3453] ipam_plugin.go 415: Releasing address using handleID ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0"
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.050 [INFO][3453] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.050 [INFO][3453] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.062 [WARNING][3453] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0"
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.062 [INFO][3453] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" HandleID="k8s-pod-network.2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1" Workload="10.0.0.89-k8s-nginx--deployment--8ffc5cf85--nsmp7-eth0"
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.064 [INFO][3453] ipam_plugin.go 377: Released host-wide IPAM lock.
Feb 12 19:24:47.066351 env[1211]: 2024-02-12 19:24:47.065 [INFO][3445] k8s.go 591: Teardown processing complete. ContainerID="2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1"
Feb 12 19:24:47.066778 env[1211]: time="2024-02-12T19:24:47.066378350Z" level=info msg="TearDown network for sandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\" successfully"
Feb 12 19:24:47.068751 env[1211]: time="2024-02-12T19:24:47.068700537Z" level=info msg="RemovePodSandbox \"2a2af36d2931ce51407825683a7eb1a34f39a967594d384c123492f412e87ad1\" returns successfully"
Feb 12 19:24:47.818843 kubelet[1553]: E0212 19:24:47.818795 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:48.819409 kubelet[1553]: E0212 19:24:48.819354 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:49.819906 kubelet[1553]: E0212 19:24:49.819844 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:50.820404 kubelet[1553]: E0212 19:24:50.820358 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:51.820684 kubelet[1553]: E0212 19:24:51.820632 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:52.821095 kubelet[1553]: E0212 19:24:52.821038 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:53.821544 kubelet[1553]: E0212 19:24:53.821507 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:24:54.205919 kubelet[1553]: I0212 19:24:54.205816 1553 topology_manager.go:210] "Topology Admit Handler"
Feb 12 19:24:54.311747 kubelet[1553]: I0212 19:24:54.311709 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4cx9\" (UniqueName: \"kubernetes.io/projected/5fa38949-81eb-46ed-aec4-4efa922a5fca-kube-api-access-n4cx9\") pod \"test-pod-1\" (UID: \"5fa38949-81eb-46ed-aec4-4efa922a5fca\") " pod="default/test-pod-1"
Feb 12 19:24:54.311912 kubelet[1553]: I0212 19:24:54.311763 1553 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be429b9d-8ce0-4594-94db-ae62d8655c8a\" (UniqueName: \"kubernetes.io/nfs/5fa38949-81eb-46ed-aec4-4efa922a5fca-pvc-be429b9d-8ce0-4594-94db-ae62d8655c8a\") pod \"test-pod-1\" (UID: \"5fa38949-81eb-46ed-aec4-4efa922a5fca\") " pod="default/test-pod-1"
Feb 12 19:24:54.436000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:24:54.442976 kernel: Failed to create system directory netfs
Feb 12 19:24:54.443034 kernel: audit: type=1400 audit(1707765894.436:278): avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:24:54.443054 kernel: Failed to create system directory netfs
Feb 12 19:24:54.443069 kernel: audit: type=1400 audit(1707765894.436:278): avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 12 19:24:54.443089 kernel: Failed to create system directory netfs
Feb 12 19:24:54.436000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0
tclass=lockdown permissive=0 Feb 12 19:24:54.445108 kernel: audit: type=1400 audit(1707765894.436:278): avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.436000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.447268 kernel: Failed to create system directory netfs Feb 12 19:24:54.447525 kernel: audit: type=1400 audit(1707765894.436:278): avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.436000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.436000 audit[3470]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf3a7f5e0 a1=12c14 a2=aaaace8fe028 a3=aaaaf3a70010 items=0 ppid=581 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:54.436000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:24:54.454316 kernel: audit: type=1300 audit(1707765894.436:278): arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf3a7f5e0 a1=12c14 a2=aaaace8fe028 a3=aaaaf3a70010 items=0 ppid=581 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:54.454386 
kernel: audit: type=1327 audit(1707765894.436:278): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.461180 kernel: Failed to create system directory fscache Feb 12 19:24:54.461275 kernel: audit: type=1400 audit(1707765894.452:279): avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.461370 kernel: Failed to create system directory fscache Feb 12 19:24:54.461398 kernel: audit: type=1400 audit(1707765894.452:279): avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.466934 kernel: Failed to create system directory fscache Feb 12 19:24:54.466980 kernel: audit: type=1400 audit(1707765894.452:279): avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.467001 kernel: Failed to create system directory fscache Feb 12 19:24:54.467016 kernel: audit: 
type=1400 audit(1707765894.452:279): avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.470634 kernel: Failed to create system directory fscache Feb 12 19:24:54.470663 kernel: Failed to create system directory fscache Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.471552 kernel: Failed to create system directory fscache Feb 12 19:24:54.471574 kernel: Failed to create system directory fscache Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.472608 
kernel: Failed to create system directory fscache Feb 12 19:24:54.472633 kernel: Failed to create system directory fscache Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.473555 kernel: Failed to create system directory fscache Feb 12 19:24:54.473583 kernel: Failed to create system directory fscache Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.452000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.475498 kernel: Failed to create system directory fscache Feb 12 19:24:54.475524 kernel: Failed to create system directory fscache Feb 12 19:24:54.476315 kernel: FS-Cache: Loaded Feb 12 19:24:54.452000 audit[3470]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf3c92210 a1=4c344 a2=aaaace8fe028 a3=aaaaf3a70010 items=0 ppid=581 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:54.452000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.498752 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.498796 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.498811 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.499863 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.500618 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.500651 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.501507 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.501535 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.502380 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.502413 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.503645 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.503663 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.503683 kernel: Failed to create system directory sunrpc Feb 12 
19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.504475 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.504490 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.505321 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.505336 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.506617 kernel: 
Failed to create system directory sunrpc Feb 12 19:24:54.506633 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.506651 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.507477 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.507492 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.508335 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.508358 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for 
pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.509633 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.509648 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.509669 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.510489 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.510504 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.511389 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.511409 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of 
tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.512695 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.512717 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.512732 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.513578 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.513603 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.514420 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.514451 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.515723 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.515747 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.515766 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.516590 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.516611 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.517440 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.517455 kernel: Failed to create 
system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.518300 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.518316 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.519602 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.519618 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.519631 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.489000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.520451 kernel: Failed to create system directory sunrpc Feb 12 19:24:54.559491 kernel: RPC: Registered named UNIX socket transport module. Feb 12 19:24:54.559568 kernel: RPC: Registered udp transport module. Feb 12 19:24:54.559587 kernel: RPC: Registered tcp transport module. 
Feb 12 19:24:54.559601 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 19:24:54.489000 audit[3470]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf3cde560 a1=fbb6c a2=aaaace8fe028 a3=aaaaf3a70010 items=6 ppid=581 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:54.489000 audit: CWD cwd="/" Feb 12 19:24:54.489000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:54.489000 audit: PATH item=1 name=(null) inode=19800 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:54.489000 audit: PATH item=2 name=(null) inode=19800 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:54.489000 audit: PATH item=3 name=(null) inode=19801 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:54.489000 audit: PATH item=4 name=(null) inode=19800 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:54.489000 audit: PATH item=5 name=(null) inode=19802 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:24:54.489000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.581317 kernel: Failed to create system directory nfs Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.602675 kernel: Failed to create system directory nfs Feb 12 19:24:54.602693 kernel: Failed to create system directory nfs Feb 12 19:24:54.602712 kernel: Failed to create system directory nfs Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.603458 kernel: Failed to create system directory nfs Feb 12 19:24:54.603473 kernel: Failed to create system directory nfs Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.575000 audit[3470]: AVC avc: denied { confidentiality } for pid=3470 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.604718 kernel: Failed to create system directory nfs Feb 12 19:24:54.604751 kernel: Failed to create system directory nfs Feb 12 19:24:54.615332 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 19:24:54.575000 audit[3470]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf3e0ab40 a1=ae35c a2=aaaace8fe028 a3=aaaaf3a70010 items=0 ppid=581 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:54.575000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.640746 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.640783 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.640824 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 
19:24:54.641644 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.641683 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.642504 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.642550 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.643377 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.643416 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.644596 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.644650 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.644668 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.645361 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.645428 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.646592 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.646625 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.646641 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of 
tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.647412 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.647434 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.648703 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.648726 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.648739 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.649547 kernel: Failed to create system directory nfs4 Feb 12 
19:24:54.649575 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.650402 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.650417 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.651618 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.651633 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.651646 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.652442 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.652502 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.653636 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.653659 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.653673 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.654448 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.654465 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.655374 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.655396 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.656301 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.656316 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.657563 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.657597 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.657624 kernel: Failed to create system directory 
nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.658540 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.658567 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.659334 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.659348 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.660535 
kernel: Failed to create system directory nfs4 Feb 12 19:24:54.660552 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.660570 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.661360 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.661374 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.662601 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.662671 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.662687 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 
19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.663395 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.663415 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.664622 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.664643 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.664657 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.665415 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.665435 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality 
} for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.666645 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.666664 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.666677 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.667456 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.667475 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 
audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.668693 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.668715 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.668733 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.669521 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.669540 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.670325 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.670341 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.633000 audit[3475]: AVC avc: denied { confidentiality } for pid=3475 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.671578 kernel: Failed to create system directory nfs4 Feb 12 19:24:54.782631 kernel: NFS: Registering the id_resolver key type Feb 12 19:24:54.782975 kernel: Key type id_resolver registered Feb 12 19:24:54.782999 kernel: Key type id_legacy registered Feb 12 19:24:54.633000 audit[3475]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=ffff7fbbb010 a1=167c04 a2=aaaac06ee028 a3=aaaaf0550010 items=0 ppid=581 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:54.633000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634 Feb 12 19:24:54.789000 audit[3476]: AVC avc: denied { confidentiality } for pid=3476 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 12 19:24:54.793574 kernel: Failed to create system directory rpcgss Feb 12 19:24:54.789000 audit[3476]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=ffff906f3010 a1=3e09c a2=aaaad8b6e028 a3=aaaadef4a010 items=0 ppid=581 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:54.789000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36 Feb 12 19:24:54.822303 kubelet[1553]: E0212 19:24:54.822252 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:54.830769 nfsidmap[3483]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 19:24:54.833890 nfsidmap[3486]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 12 19:24:54.845000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2327 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:24:54.845000 audit[1289]: AVC avc: denied { watch_reads } for pid=1289 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2327 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 12 19:24:54.845000 audit[1289]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=d a1=aaaaf1442470 a2=10 a3=0 items=0 ppid=1 pid=1289 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:54.845000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 12 19:24:55.112205 env[1211]: time="2024-02-12T19:24:55.112144275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5fa38949-81eb-46ed-aec4-4efa922a5fca,Namespace:default,Attempt:0,}" Feb 12 19:24:55.261901 systemd-networkd[1096]: cali5ec59c6bf6e: Link UP Feb 12 19:24:55.263458 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:24:55.263587 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 12 19:24:55.263632 systemd-networkd[1096]: cali5ec59c6bf6e: Gained carrier Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.167 [INFO][3489] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.89-k8s-test--pod--1-eth0 default 5fa38949-81eb-46ed-aec4-4efa922a5fca 1158 0 2024-02-12 19:24:39 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.89 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.167 [INFO][3489] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.200 [INFO][3502] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" HandleID="k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Workload="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.213 [INFO][3502] ipam_plugin.go 268: Auto assigning IP ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" HandleID="k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Workload="10.0.0.89-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cdb40), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.89", "pod":"test-pod-1", "timestamp":"2024-02-12 19:24:55.200010843 +0000 UTC"}, Hostname:"10.0.0.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.213 [INFO][3502] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.213 [INFO][3502] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.213 [INFO][3502] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.89' Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.215 [INFO][3502] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.219 [INFO][3502] ipam.go 372: Looking up existing affinities for host host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.226 [INFO][3502] ipam.go 489: Trying affinity for 192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.233 [INFO][3502] ipam.go 155: Attempting to load block cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.236 [INFO][3502] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.236 [INFO][3502] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.238 [INFO][3502] ipam.go 1682: Creating new handle: k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291 Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.247 [INFO][3502] ipam.go 1203: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.253 [INFO][3502] ipam.go 1216: Successfully claimed IPs: [192.168.98.5/26] block=192.168.98.0/26 handle="k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.253 [INFO][3502] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.98.5/26] handle="k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" host="10.0.0.89" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.254 [INFO][3502] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.254 [INFO][3502] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.98.5/26] IPv6=[] ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" HandleID="k8s-pod-network.8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Workload="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.255 [INFO][3489] k8s.go 385: Populated endpoint ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5fa38949-81eb-46ed-aec4-4efa922a5fca", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, 
InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:55.272025 env[1211]: 2024-02-12 19:24:55.255 [INFO][3489] k8s.go 386: Calico CNI using IPs: [192.168.98.5/32] ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:24:55.273236 env[1211]: 2024-02-12 19:24:55.255 [INFO][3489] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:24:55.273236 env[1211]: 2024-02-12 19:24:55.263 [INFO][3489] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:24:55.273236 env[1211]: 2024-02-12 19:24:55.264 [INFO][3489] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.89-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5fa38949-81eb-46ed-aec4-4efa922a5fca", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 19, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.89", ContainerID:"8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.98.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"8e:77:45:2b:a0:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 19:24:55.273236 env[1211]: 2024-02-12 19:24:55.270 [INFO][3489] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.89-k8s-test--pod--1-eth0" Feb 12 19:24:55.283000 audit[3523]: NETFILTER_CFG table=filter:101 family=2 entries=48 op=nft_register_chain pid=3523 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 19:24:55.283000 audit[3523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23120 a0=3 a1=ffffe7cd22f0 a2=0 a3=ffffa7209fa8 items=0 ppid=2264 pid=3523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:24:55.283000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 19:24:55.291321 env[1211]: time="2024-02-12T19:24:55.291214783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:55.291321 env[1211]: time="2024-02-12T19:24:55.291266310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:55.291321 env[1211]: time="2024-02-12T19:24:55.291277512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:55.291678 env[1211]: time="2024-02-12T19:24:55.291638841Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291 pid=3532 runtime=io.containerd.runc.v2 Feb 12 19:24:55.358397 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 19:24:55.375576 env[1211]: time="2024-02-12T19:24:55.375454739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5fa38949-81eb-46ed-aec4-4efa922a5fca,Namespace:default,Attempt:0,} returns sandbox id \"8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291\"" Feb 12 19:24:55.377249 env[1211]: time="2024-02-12T19:24:55.377200696Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:24:55.775071 env[1211]: time="2024-02-12T19:24:55.774965891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:55.776344 env[1211]: time="2024-02-12T19:24:55.776318315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:55.778162 env[1211]: time="2024-02-12T19:24:55.778130041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:55.779904 env[1211]: time="2024-02-12T19:24:55.779867557Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:24:55.780627 env[1211]: time="2024-02-12T19:24:55.780602536Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 12 19:24:55.783082 env[1211]: time="2024-02-12T19:24:55.783040387Z" level=info msg="CreateContainer within sandbox \"8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 19:24:55.793482 env[1211]: time="2024-02-12T19:24:55.793434238Z" level=info msg="CreateContainer within sandbox \"8b42970ffd93d06c6a456d784e2f10fee5cc7e7ea9d230f6318e9bd5a762b291\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8694fb8ace4928fabe76029d29af3880384e49bb78256b72db48b35b8b900a29\"" Feb 12 19:24:55.794398 env[1211]: time="2024-02-12T19:24:55.794358244Z" level=info msg="StartContainer for \"8694fb8ace4928fabe76029d29af3880384e49bb78256b72db48b35b8b900a29\"" Feb 12 19:24:55.823313 kubelet[1553]: E0212 19:24:55.822596 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:55.849446 env[1211]: time="2024-02-12T19:24:55.849396395Z" level=info msg="StartContainer for \"8694fb8ace4928fabe76029d29af3880384e49bb78256b72db48b35b8b900a29\" returns successfully" Feb 12 19:24:56.089060 kubelet[1553]: I0212 19:24:56.089029 1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372019765789e+09 pod.CreationTimestamp="2024-02-12 19:24:39 +0000 UTC" firstStartedPulling="2024-02-12 19:24:55.376743354 +0000 UTC m=+69.621171053" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:56.088980672 +0000 UTC 
m=+70.333408331" watchObservedRunningTime="2024-02-12 19:24:56.088986113 +0000 UTC m=+70.333413812" Feb 12 19:24:56.335459 systemd-networkd[1096]: cali5ec59c6bf6e: Gained IPv6LL Feb 12 19:24:56.823585 kubelet[1553]: E0212 19:24:56.823540 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:57.824257 kubelet[1553]: E0212 19:24:57.824213 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:58.824869 kubelet[1553]: E0212 19:24:58.824821 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:24:59.825272 kubelet[1553]: E0212 19:24:59.825223 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:00.831559 kubelet[1553]: E0212 19:25:00.831510 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:25:01.831770 kubelet[1553]: E0212 19:25:01.831718 1553 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"