Feb 9 19:25:06.990762 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:25:06.990799 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:25:06.990819 kernel: BIOS-provided physical RAM map:
Feb 9 19:25:06.990832 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 9 19:25:06.990845 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 9 19:25:06.990857 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 9 19:25:06.990871 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 9 19:25:06.990885 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 9 19:25:06.990900 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 9 19:25:06.990912 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 9 19:25:06.990925 kernel: NX (Execute Disable) protection: active
Feb 9 19:25:06.990936 kernel: SMBIOS 2.8 present.
Feb 9 19:25:06.990949 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 9 19:25:06.990961 kernel: Hypervisor detected: KVM
Feb 9 19:25:06.990976 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:25:06.990992 kernel: kvm-clock: cpu 0, msr 13faa001, primary cpu clock
Feb 9 19:25:06.991005 kernel: kvm-clock: using sched offset of 4805081857 cycles
Feb 9 19:25:06.991019 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:25:06.991033 kernel: tsc: Detected 1996.249 MHz processor
Feb 9 19:25:06.991047 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:25:06.991062 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:25:06.991075 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 9 19:25:06.991089 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:25:06.991105 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:25:06.991119 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 9 19:25:06.991132 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:25:06.991146 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:25:06.991159 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:25:06.991173 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 9 19:25:06.991186 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:25:06.991200 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:25:06.991213 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 9 19:25:06.992311 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 9 19:25:06.992329 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 9 19:25:06.992343 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 9 19:25:06.992356 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 9 19:25:06.992369 kernel: No NUMA configuration found
Feb 9 19:25:06.992383 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 9 19:25:06.992396 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 9 19:25:06.992410 kernel: Zone ranges:
Feb 9 19:25:06.992435 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:25:06.992449 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 9 19:25:06.992463 kernel: Normal empty
Feb 9 19:25:06.992478 kernel: Movable zone start for each node
Feb 9 19:25:06.992491 kernel: Early memory node ranges
Feb 9 19:25:06.992505 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 9 19:25:06.992522 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 9 19:25:06.992536 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 9 19:25:06.992550 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:25:06.992564 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 9 19:25:06.992578 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 9 19:25:06.992592 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 9 19:25:06.992606 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:25:06.992621 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:25:06.992635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 19:25:06.992651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:25:06.992665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:25:06.992679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:25:06.992693 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:25:06.992707 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:25:06.992721 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:25:06.992735 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 9 19:25:06.992749 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:25:06.992764 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:25:06.992778 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:25:06.992795 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:25:06.992809 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:25:06.992823 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:25:06.992837 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 9 19:25:06.992851 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 9 19:25:06.992865 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 9 19:25:06.992879 kernel: Policy zone: DMA32
Feb 9 19:25:06.992895 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:25:06.992913 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:25:06.992927 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:25:06.992942 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:25:06.992956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:25:06.992971 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 9 19:25:06.992985 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:25:06.992999 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:25:06.993013 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:25:06.993029 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:25:06.993045 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:25:06.993059 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:25:06.993074 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:25:06.993088 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:25:06.993102 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:25:06.993117 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:25:06.993131 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 9 19:25:06.993144 kernel: Console: colour VGA+ 80x25
Feb 9 19:25:06.993161 kernel: printk: console [tty0] enabled
Feb 9 19:25:06.993175 kernel: printk: console [ttyS0] enabled
Feb 9 19:25:06.993189 kernel: ACPI: Core revision 20210730
Feb 9 19:25:06.993203 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:25:06.993217 kernel: x2apic enabled
Feb 9 19:25:06.993255 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:25:06.993270 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:25:06.993284 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 19:25:06.993299 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 9 19:25:06.993313 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 9 19:25:06.993331 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 9 19:25:06.993345 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:25:06.993359 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:25:06.993374 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:25:06.993388 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:25:06.993402 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:25:06.993416 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 9 19:25:06.993430 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:25:06.993444 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:25:06.993460 kernel: LSM: Security Framework initializing
Feb 9 19:25:06.993474 kernel: SELinux: Initializing.
Feb 9 19:25:06.993488 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:25:06.993502 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:25:06.993517 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 9 19:25:06.993531 kernel: Performance Events: AMD PMU driver.
Feb 9 19:25:06.993545 kernel: ... version: 0
Feb 9 19:25:06.993559 kernel: ... bit width: 48
Feb 9 19:25:06.993573 kernel: ... generic registers: 4
Feb 9 19:25:06.993598 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:25:06.993613 kernel: ... max period: 00007fffffffffff
Feb 9 19:25:06.993629 kernel: ... fixed-purpose events: 0
Feb 9 19:25:06.993644 kernel: ... event mask: 000000000000000f
Feb 9 19:25:06.993659 kernel: signal: max sigframe size: 1440
Feb 9 19:25:06.993673 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:25:06.993688 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:25:06.993703 kernel: x86: Booting SMP configuration:
Feb 9 19:25:06.993720 kernel: .... node #0, CPUs: #1
Feb 9 19:25:06.993735 kernel: kvm-clock: cpu 1, msr 13faa041, secondary cpu clock
Feb 9 19:25:06.993750 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 9 19:25:06.993765 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:25:06.993779 kernel: smpboot: Max logical packages: 2
Feb 9 19:25:06.993794 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 9 19:25:06.993809 kernel: devtmpfs: initialized
Feb 9 19:25:06.993823 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:25:06.993838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:25:06.993856 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:25:06.993870 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:25:06.993885 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:25:06.993900 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:25:06.993915 kernel: audit: type=2000 audit(1707506706.462:1): state=initialized audit_enabled=0 res=1
Feb 9 19:25:06.993944 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:25:06.993960 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:25:06.993974 kernel: cpuidle: using governor menu
Feb 9 19:25:06.993989 kernel: ACPI: bus type PCI registered
Feb 9 19:25:06.994006 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:25:06.994021 kernel: dca service started, version 1.12.1
Feb 9 19:25:06.994036 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:25:06.994051 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:25:06.994066 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:25:06.994081 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:25:06.994096 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:25:06.994110 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:25:06.994125 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:25:06.994142 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:25:06.994157 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:25:06.994172 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:25:06.994186 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:25:06.994201 kernel: ACPI: Interpreter enabled
Feb 9 19:25:06.994216 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 19:25:06.994249 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:25:06.994264 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:25:06.994279 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 19:25:06.994297 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:25:06.994619 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:25:06.994714 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 9 19:25:06.994727 kernel: acpiphp: Slot [3] registered
Feb 9 19:25:06.994735 kernel: acpiphp: Slot [4] registered
Feb 9 19:25:06.994743 kernel: acpiphp: Slot [5] registered
Feb 9 19:25:06.994751 kernel: acpiphp: Slot [6] registered
Feb 9 19:25:06.994761 kernel: acpiphp: Slot [7] registered
Feb 9 19:25:06.994769 kernel: acpiphp: Slot [8] registered
Feb 9 19:25:06.994777 kernel: acpiphp: Slot [9] registered
Feb 9 19:25:06.994785 kernel: acpiphp: Slot [10] registered
Feb 9 19:25:06.994793 kernel: acpiphp: Slot [11] registered
Feb 9 19:25:06.994801 kernel: acpiphp: Slot [12] registered
Feb 9 19:25:06.994809 kernel: acpiphp: Slot [13] registered
Feb 9 19:25:06.994816 kernel: acpiphp: Slot [14] registered
Feb 9 19:25:06.994824 kernel: acpiphp: Slot [15] registered
Feb 9 19:25:06.994832 kernel: acpiphp: Slot [16] registered
Feb 9 19:25:06.994841 kernel: acpiphp: Slot [17] registered
Feb 9 19:25:06.994848 kernel: acpiphp: Slot [18] registered
Feb 9 19:25:06.994856 kernel: acpiphp: Slot [19] registered
Feb 9 19:25:06.994864 kernel: acpiphp: Slot [20] registered
Feb 9 19:25:06.994872 kernel: acpiphp: Slot [21] registered
Feb 9 19:25:06.994879 kernel: acpiphp: Slot [22] registered
Feb 9 19:25:06.994887 kernel: acpiphp: Slot [23] registered
Feb 9 19:25:06.994895 kernel: acpiphp: Slot [24] registered
Feb 9 19:25:06.994903 kernel: acpiphp: Slot [25] registered
Feb 9 19:25:06.994912 kernel: acpiphp: Slot [26] registered
Feb 9 19:25:06.994920 kernel: acpiphp: Slot [27] registered
Feb 9 19:25:06.994927 kernel: acpiphp: Slot [28] registered
Feb 9 19:25:06.994935 kernel: acpiphp: Slot [29] registered
Feb 9 19:25:06.994943 kernel: acpiphp: Slot [30] registered
Feb 9 19:25:06.994951 kernel: acpiphp: Slot [31] registered
Feb 9 19:25:06.994958 kernel: PCI host bridge to bus 0000:00
Feb 9 19:25:06.995057 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:25:06.995132 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:25:06.995207 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:25:06.995297 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 9 19:25:06.995371 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 9 19:25:06.995442 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:25:06.995539 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:25:06.995634 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 19:25:06.995752 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 19:25:06.995843 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 9 19:25:06.995927 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 19:25:06.996008 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 19:25:06.996091 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 19:25:06.996174 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 19:25:07.003353 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 19:25:07.003453 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 9 19:25:07.003540 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 9 19:25:07.003687 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 9 19:25:07.003777 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 9 19:25:07.003861 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 9 19:25:07.003944 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 9 19:25:07.004031 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 9 19:25:07.004112 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:25:07.004202 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 9 19:25:07.004303 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 9 19:25:07.004385 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 9 19:25:07.004467 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 9 19:25:07.004549 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 9 19:25:07.004643 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 19:25:07.004725 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 19:25:07.004828 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 9 19:25:07.004917 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 9 19:25:07.005010 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 9 19:25:07.005091 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 9 19:25:07.005172 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 9 19:25:07.005284 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 19:25:07.005369 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 9 19:25:07.005450 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 9 19:25:07.005462 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:25:07.005471 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:25:07.005479 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:25:07.005487 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:25:07.005496 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:25:07.005507 kernel: iommu: Default domain type: Translated
Feb 9 19:25:07.005515 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:25:07.005595 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 19:25:07.005677 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:25:07.005758 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 19:25:07.005770 kernel: vgaarb: loaded
Feb 9 19:25:07.005778 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:25:07.005787 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:25:07.005795 kernel: PTP clock support registered
Feb 9 19:25:07.005806 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:25:07.005814 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:25:07.005822 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 9 19:25:07.005830 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 9 19:25:07.005837 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:25:07.005845 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:25:07.005853 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:25:07.005861 kernel: pnp: PnP ACPI init
Feb 9 19:25:07.005975 kernel: pnp 00:03: [dma 2]
Feb 9 19:25:07.005992 kernel: pnp: PnP ACPI: found 5 devices
Feb 9 19:25:07.006001 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:25:07.006009 kernel: NET: Registered PF_INET protocol family
Feb 9 19:25:07.006017 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:25:07.006025 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 9 19:25:07.006033 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:25:07.006041 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:25:07.006049 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 19:25:07.006059 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 9 19:25:07.006067 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:25:07.006075 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:25:07.006083 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:25:07.006091 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:25:07.006164 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:25:07.006256 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:25:07.006331 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:25:07.006402 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 9 19:25:07.006477 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 9 19:25:07.006557 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 19:25:07.006640 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:25:07.006720 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 19:25:07.006732 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:25:07.006740 kernel: Initialise system trusted keyrings
Feb 9 19:25:07.006748 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 9 19:25:07.006759 kernel: Key type asymmetric registered
Feb 9 19:25:07.006767 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:25:07.006775 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:25:07.006783 kernel: io scheduler mq-deadline registered
Feb 9 19:25:07.006791 kernel: io scheduler kyber registered
Feb 9 19:25:07.006799 kernel: io scheduler bfq registered
Feb 9 19:25:07.006807 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:25:07.006815 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 9 19:25:07.006824 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 19:25:07.006832 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 9 19:25:07.006841 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 19:25:07.006849 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:25:07.006857 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:25:07.006865 kernel: random: crng init done
Feb 9 19:25:07.006873 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:25:07.006881 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:25:07.006889 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:25:07.006977 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 9 19:25:07.006993 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:25:07.007066 kernel: rtc_cmos 00:04: registered as rtc0
Feb 9 19:25:07.007139 kernel: rtc_cmos 00:04: setting system clock to 2024-02-09T19:25:06 UTC (1707506706)
Feb 9 19:25:07.007212 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 9 19:25:07.007241 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:25:07.007251 kernel: Segment Routing with IPv6
Feb 9 19:25:07.007259 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:25:07.007266 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:25:07.007274 kernel: Key type dns_resolver registered
Feb 9 19:25:07.007285 kernel: IPI shorthand broadcast: enabled
Feb 9 19:25:07.007293 kernel: sched_clock: Marking stable (697238203, 116310742)->(840866785, -27317840)
Feb 9 19:25:07.007301 kernel: registered taskstats version 1
Feb 9 19:25:07.007309 kernel: Loading compiled-in X.509 certificates
Feb 9 19:25:07.007317 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:25:07.007325 kernel: Key type .fscrypt registered
Feb 9 19:25:07.007333 kernel: Key type fscrypt-provisioning registered
Feb 9 19:25:07.007341 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:25:07.007351 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:25:07.007359 kernel: ima: No architecture policies found
Feb 9 19:25:07.007367 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:25:07.007375 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:25:07.007383 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:25:07.007391 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:25:07.007399 kernel: Run /init as init process
Feb 9 19:25:07.007406 kernel: with arguments:
Feb 9 19:25:07.007414 kernel: /init
Feb 9 19:25:07.007424 kernel: with environment:
Feb 9 19:25:07.007432 kernel: HOME=/
Feb 9 19:25:07.007439 kernel: TERM=linux
Feb 9 19:25:07.007447 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:25:07.007458 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:25:07.007469 systemd[1]: Detected virtualization kvm.
Feb 9 19:25:07.007478 systemd[1]: Detected architecture x86-64.
Feb 9 19:25:07.007486 systemd[1]: Running in initrd.
Feb 9 19:25:07.007498 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:25:07.007507 systemd[1]: Hostname set to .
Feb 9 19:25:07.007516 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:25:07.007525 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:25:07.007533 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:25:07.007542 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:25:07.007550 systemd[1]: Reached target paths.target.
Feb 9 19:25:07.007559 systemd[1]: Reached target slices.target.
Feb 9 19:25:07.007570 systemd[1]: Reached target swap.target.
Feb 9 19:25:07.007578 systemd[1]: Reached target timers.target.
Feb 9 19:25:07.007587 systemd[1]: Listening on iscsid.socket.
Feb 9 19:25:07.007596 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:25:07.007605 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:25:07.007613 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:25:07.007622 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:25:07.007633 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:25:07.007642 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:25:07.007650 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:25:07.007659 systemd[1]: Reached target sockets.target.
Feb 9 19:25:07.007668 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:25:07.007686 systemd[1]: Finished network-cleanup.service.
Feb 9 19:25:07.007697 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:25:07.007710 systemd[1]: Starting systemd-journald.service...
Feb 9 19:25:07.007719 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:25:07.007727 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:25:07.007736 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:25:07.007745 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:25:07.007754 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:25:07.007766 systemd-journald[185]: Journal started
Feb 9 19:25:07.007830 systemd-journald[185]: Runtime Journal (/run/log/journal/6ee487d2db1d480d9fa62531544af32f) is 4.9M, max 39.5M, 34.5M free.
Feb 9 19:25:06.997986 systemd-modules-load[186]: Inserted module 'overlay'
Feb 9 19:25:07.024517 systemd[1]: Started systemd-journald.service.
Feb 9 19:25:07.024542 kernel: audit: type=1130 audit(1707506707.018:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.003904 systemd-resolved[187]: Positive Trust Anchors:
Feb 9 19:25:07.003914 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:25:07.003951 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:25:07.038876 kernel: audit: type=1130 audit(1707506707.028:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.038904 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:25:07.038916 kernel: Bridge firewalling registered
Feb 9 19:25:07.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.009700 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 9 19:25:07.050322 kernel: audit: type=1130 audit(1707506707.039:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.050343 kernel: audit: type=1130 audit(1707506707.043:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.029093 systemd[1]: Started systemd-resolved.service.
Feb 9 19:25:07.038395 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 9 19:25:07.039735 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:25:07.043783 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:25:07.057396 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:25:07.058890 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:25:07.060636 kernel: SCSI subsystem initialized
Feb 9 19:25:07.067374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:25:07.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.072247 kernel: audit: type=1130 audit(1707506707.068:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:07.082211 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:25:07.086487 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:25:07.086553 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:25:07.086575 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:25:07.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.087363 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 9 19:25:07.091901 kernel: audit: type=1130 audit(1707506707.086:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.091276 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:25:07.093331 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:25:07.100056 kernel: audit: type=1130 audit(1707506707.093:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.094748 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:25:07.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.105043 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:25:07.109444 kernel: audit: type=1130 audit(1707506707.105:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:07.110663 dracut-cmdline[205]: dracut-dracut-053 Feb 9 19:25:07.112691 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:25:07.178366 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:25:07.192258 kernel: iscsi: registered transport (tcp) Feb 9 19:25:07.215473 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:25:07.215585 kernel: QLogic iSCSI HBA Driver Feb 9 19:25:07.292396 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:25:07.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.295705 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:25:07.305119 kernel: audit: type=1130 audit(1707506707.293:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.376347 kernel: raid6: sse2x4 gen() 13327 MB/s Feb 9 19:25:07.393325 kernel: raid6: sse2x4 xor() 7327 MB/s Feb 9 19:25:07.410303 kernel: raid6: sse2x2 gen() 14761 MB/s Feb 9 19:25:07.427324 kernel: raid6: sse2x2 xor() 8849 MB/s Feb 9 19:25:07.444322 kernel: raid6: sse2x1 gen() 11459 MB/s Feb 9 19:25:07.462016 kernel: raid6: sse2x1 xor() 6972 MB/s Feb 9 19:25:07.462086 kernel: raid6: using algorithm sse2x2 gen() 14761 MB/s Feb 9 19:25:07.462114 kernel: raid6: .... 
xor() 8849 MB/s, rmw enabled Feb 9 19:25:07.462893 kernel: raid6: using ssse3x2 recovery algorithm Feb 9 19:25:07.478751 kernel: xor: measuring software checksum speed Feb 9 19:25:07.478809 kernel: prefetch64-sse : 18464 MB/sec Feb 9 19:25:07.479274 kernel: generic_sse : 16819 MB/sec Feb 9 19:25:07.481149 kernel: xor: using function: prefetch64-sse (18464 MB/sec) Feb 9 19:25:07.597305 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:25:07.613746 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:25:07.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.614000 audit: BPF prog-id=7 op=LOAD Feb 9 19:25:07.614000 audit: BPF prog-id=8 op=LOAD Feb 9 19:25:07.615286 systemd[1]: Starting systemd-udevd.service... Feb 9 19:25:07.629446 systemd-udevd[386]: Using default interface naming scheme 'v252'. Feb 9 19:25:07.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.640877 systemd[1]: Started systemd-udevd.service. Feb 9 19:25:07.646398 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:25:07.662546 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Feb 9 19:25:07.714531 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:25:07.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:07.719048 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:25:07.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:07.760140 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:25:07.807581 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 9 19:25:07.814963 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:25:07.814994 kernel: GPT:17805311 != 41943039 Feb 9 19:25:07.815006 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:25:07.816082 kernel: GPT:17805311 != 41943039 Feb 9 19:25:07.816832 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:25:07.818794 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:25:07.859248 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446) Feb 9 19:25:07.864618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:25:07.910432 kernel: libata version 3.00 loaded. Feb 9 19:25:07.910471 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:25:07.910746 kernel: scsi host0: ata_piix Feb 9 19:25:07.910965 kernel: scsi host1: ata_piix Feb 9 19:25:07.911161 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 9 19:25:07.911184 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 9 19:25:07.913529 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:25:07.914055 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:25:07.921913 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:25:07.925989 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:25:07.928359 systemd[1]: Starting disk-uuid.service... Feb 9 19:25:07.948257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:25:07.951823 disk-uuid[462]: Primary Header is updated. Feb 9 19:25:07.951823 disk-uuid[462]: Secondary Entries is updated. Feb 9 19:25:07.951823 disk-uuid[462]: Secondary Header is updated. 
Feb 9 19:25:08.966299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:25:08.967702 disk-uuid[464]: The operation has completed successfully. Feb 9 19:25:09.031285 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:25:09.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.031524 systemd[1]: Finished disk-uuid.service. Feb 9 19:25:09.051144 systemd[1]: Starting verity-setup.service... Feb 9 19:25:09.070944 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 9 19:25:09.172777 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:25:09.177056 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:25:09.183559 systemd[1]: Finished verity-setup.service. Feb 9 19:25:09.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.316367 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:25:09.316596 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:25:09.317634 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:25:09.320064 systemd[1]: Starting ignition-setup.service... Feb 9 19:25:09.322595 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 19:25:09.335497 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:25:09.335533 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:25:09.335544 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:25:09.352317 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:25:09.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.364998 systemd[1]: Finished ignition-setup.service. Feb 9 19:25:09.366370 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:25:09.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.459000 audit: BPF prog-id=9 op=LOAD Feb 9 19:25:09.458363 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:25:09.460195 systemd[1]: Starting systemd-networkd.service... Feb 9 19:25:09.484480 systemd-networkd[634]: lo: Link UP Feb 9 19:25:09.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.484492 systemd-networkd[634]: lo: Gained carrier Feb 9 19:25:09.484964 systemd-networkd[634]: Enumeration completed Feb 9 19:25:09.485180 systemd-networkd[634]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:25:09.485451 systemd[1]: Started systemd-networkd.service. Feb 9 19:25:09.486330 systemd[1]: Reached target network.target. Feb 9 19:25:09.486475 systemd-networkd[634]: eth0: Link UP Feb 9 19:25:09.486480 systemd-networkd[634]: eth0: Gained carrier Feb 9 19:25:09.488066 systemd[1]: Starting iscsiuio.service... 
Feb 9 19:25:09.495730 systemd[1]: Started iscsiuio.service. Feb 9 19:25:09.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.497284 systemd[1]: Starting iscsid.service... Feb 9 19:25:09.501041 iscsid[644]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:25:09.501041 iscsid[644]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:25:09.501041 iscsid[644]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:25:09.501041 iscsid[644]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:25:09.501041 iscsid[644]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:25:09.501041 iscsid[644]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:25:09.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.502318 systemd[1]: Started iscsid.service. Feb 9 19:25:09.503863 systemd-networkd[634]: eth0: DHCPv4 address 172.24.4.217/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:25:09.507322 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:25:09.523716 systemd[1]: Finished dracut-initqueue.service. 
Feb 9 19:25:09.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.525479 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:25:09.526571 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:25:09.527642 systemd[1]: Reached target remote-fs.target. Feb 9 19:25:09.529568 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:25:09.541418 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:25:09.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.666707 ignition[547]: Ignition 2.14.0 Feb 9 19:25:09.666750 ignition[547]: Stage: fetch-offline Feb 9 19:25:09.666866 ignition[547]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:25:09.666912 ignition[547]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:25:09.669187 ignition[547]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:25:09.669438 ignition[547]: parsed url from cmdline: "" Feb 9 19:25:09.672658 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:25:09.669448 ignition[547]: no config URL provided Feb 9 19:25:09.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:09.675780 systemd[1]: Starting ignition-fetch.service... 
Feb 9 19:25:09.669461 ignition[547]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:25:09.669478 ignition[547]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:25:09.669492 ignition[547]: failed to fetch config: resource requires networking Feb 9 19:25:09.670164 ignition[547]: Ignition finished successfully Feb 9 19:25:09.693767 ignition[658]: Ignition 2.14.0 Feb 9 19:25:09.693794 ignition[658]: Stage: fetch Feb 9 19:25:09.694084 ignition[658]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:25:09.694129 ignition[658]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:25:09.696407 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:25:09.696628 ignition[658]: parsed url from cmdline: "" Feb 9 19:25:09.696637 ignition[658]: no config URL provided Feb 9 19:25:09.696651 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:25:09.696669 ignition[658]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:25:09.703274 ignition[658]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 9 19:25:09.703330 ignition[658]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 9 19:25:09.703908 ignition[658]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 9 19:25:10.023763 ignition[658]: GET result: OK Feb 9 19:25:10.023868 ignition[658]: parsing config with SHA512: b512dee88bc0a959d70ead2786b3c514b01c396660f195004cd60ba3cceb48228b3cfce4f8008e847d10488d6350951955ada3f46626b1f900663774659ed17b Feb 9 19:25:10.140203 unknown[658]: fetched base config from "system" Feb 9 19:25:10.140269 unknown[658]: fetched base config from "system" Feb 9 19:25:10.140285 unknown[658]: fetched user config from "openstack" Feb 9 19:25:10.142008 ignition[658]: fetch: fetch complete Feb 9 19:25:10.142021 ignition[658]: fetch: fetch passed Feb 9 19:25:10.146022 systemd[1]: Finished ignition-fetch.service. Feb 9 19:25:10.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:10.142107 ignition[658]: Ignition finished successfully Feb 9 19:25:10.149149 systemd[1]: Starting ignition-kargs.service... Feb 9 19:25:10.174590 ignition[664]: Ignition 2.14.0 Feb 9 19:25:10.174620 ignition[664]: Stage: kargs Feb 9 19:25:10.174857 ignition[664]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:25:10.174907 ignition[664]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:25:10.177113 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:25:10.180699 ignition[664]: kargs: kargs passed Feb 9 19:25:10.180797 ignition[664]: Ignition finished successfully Feb 9 19:25:10.183084 systemd[1]: Finished ignition-kargs.service. Feb 9 19:25:10.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:10.195337 systemd[1]: Starting ignition-disks.service... Feb 9 19:25:10.209578 ignition[669]: Ignition 2.14.0 Feb 9 19:25:10.209599 ignition[669]: Stage: disks Feb 9 19:25:10.209786 ignition[669]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:25:10.209818 ignition[669]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:25:10.211343 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:25:10.213847 ignition[669]: disks: disks passed Feb 9 19:25:10.213908 ignition[669]: Ignition finished successfully Feb 9 19:25:10.215496 systemd[1]: Finished ignition-disks.service. Feb 9 19:25:10.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:10.216194 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:25:10.216673 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:25:10.217604 systemd[1]: Reached target local-fs.target. Feb 9 19:25:10.218538 systemd[1]: Reached target sysinit.target. Feb 9 19:25:10.219456 systemd[1]: Reached target basic.target. Feb 9 19:25:10.221147 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:25:10.240644 systemd-fsck[677]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:25:10.248938 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:25:10.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:10.251123 systemd[1]: Mounting sysroot.mount... Feb 9 19:25:10.266180 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Feb 9 19:25:10.265470 systemd[1]: Mounted sysroot.mount. Feb 9 19:25:10.267337 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:25:10.270833 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:25:10.273340 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:25:10.275931 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 9 19:25:10.278183 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:25:10.279043 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:25:10.283492 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:25:10.289136 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:25:10.290415 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:25:10.298381 initrd-setup-root[689]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:25:10.314273 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (684) Feb 9 19:25:10.318592 initrd-setup-root[697]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:25:10.324558 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:25:10.324585 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:25:10.324596 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:25:10.329912 initrd-setup-root[721]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:25:10.336624 initrd-setup-root[731]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:25:10.339255 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:25:10.443128 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:25:10.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:10.444616 systemd[1]: Starting ignition-mount.service... Feb 9 19:25:10.447998 systemd[1]: Starting sysroot-boot.service... Feb 9 19:25:10.459130 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:25:10.459297 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:25:10.488660 ignition[752]: INFO : Ignition 2.14.0 Feb 9 19:25:10.489469 ignition[752]: INFO : Stage: mount Feb 9 19:25:10.490260 ignition[752]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:25:10.491006 ignition[752]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:25:10.493262 coreos-metadata[683]: Feb 09 19:25:10.493 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 9 19:25:10.494760 ignition[752]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:25:10.496944 ignition[752]: INFO : mount: mount passed Feb 9 19:25:10.497559 ignition[752]: INFO : Ignition finished successfully Feb 9 19:25:10.499129 systemd[1]: Finished sysroot-boot.service. Feb 9 19:25:10.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:10.501132 systemd[1]: Finished ignition-mount.service. Feb 9 19:25:10.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:10.511256 coreos-metadata[683]: Feb 09 19:25:10.511 INFO Fetch successful Feb 9 19:25:10.511927 coreos-metadata[683]: Feb 09 19:25:10.511 INFO wrote hostname ci-3510-3-2-b-76a749f546.novalocal to /sysroot/etc/hostname Feb 9 19:25:10.514753 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 9 19:25:10.514852 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 9 19:25:10.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:10.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:10.516958 systemd[1]: Starting ignition-files.service... Feb 9 19:25:10.524436 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:25:10.534253 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (760) Feb 9 19:25:10.537786 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:25:10.537810 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:25:10.537821 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:25:10.545523 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:25:10.555939 ignition[779]: INFO : Ignition 2.14.0 Feb 9 19:25:10.555939 ignition[779]: INFO : Stage: files Feb 9 19:25:10.556980 ignition[779]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:25:10.556980 ignition[779]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:25:10.558652 ignition[779]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:25:10.564236 ignition[779]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:25:10.565541 ignition[779]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:25:10.565541 ignition[779]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:25:10.572486 ignition[779]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:25:10.573338 ignition[779]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:25:10.575242 unknown[779]: wrote ssh authorized keys file for user: core Feb 9 19:25:10.575919 ignition[779]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:25:10.576608 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:25:10.576608 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:25:11.232104 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:25:11.384774 systemd-networkd[634]: eth0: Gained IPv6LL Feb 9 19:25:11.546115 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:25:11.546115 ignition[779]: INFO 
: files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:25:11.546115 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:25:12.112387 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:25:12.878471 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:25:12.878471 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:25:12.878471 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:25:12.886704 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:25:13.380020 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:25:13.855312 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:25:13.855312 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:25:13.855312 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:25:13.873278 ignition[779]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:25:13.873278 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:25:13.873278 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:25:14.004037 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:25:14.873962 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:25:14.873962 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:25:14.873962 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:25:14.881730 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 9 19:25:14.983368 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 19:25:15.862134 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 9 19:25:15.862134 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:25:15.862134 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:25:15.869428 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:25:15.971856 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 19:25:18.246170 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 19:25:18.247926 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:25:18.248829 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:25:18.249824 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:25:18.250684 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:25:18.251520 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:25:18.251520 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:25:18.251520 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:25:18.251520 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(12): [started] processing unit "containerd.service"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(12): [finished] processing unit "containerd.service"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:25:18.254914 ignition[779]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:25:18.290468 kernel: kauditd_printk_skb: 27 callbacks suppressed
Feb 9 19:25:18.290492 kernel: audit: type=1130 audit(1707506718.264:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1a): [started] processing unit "coreos-metadata.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1a): op(1b): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1a): op(1b): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1a): [finished] processing unit "coreos-metadata.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:25:18.290560 ignition[779]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:25:18.322133 kernel: audit: type=1130 audit(1707506718.291:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.322154 kernel: audit: type=1131 audit(1707506718.291:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.322165 kernel: audit: type=1130 audit(1707506718.302:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.262109 systemd[1]: Finished ignition-files.service.
Feb 9 19:25:18.322964 ignition[779]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:25:18.322964 ignition[779]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 19:25:18.322964 ignition[779]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 19:25:18.322964 ignition[779]: INFO : files: createResultFile: createFiles: op(20): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:25:18.322964 ignition[779]: INFO : files: createResultFile: createFiles: op(20): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:25:18.322964 ignition[779]: INFO : files: files passed
Feb 9 19:25:18.322964 ignition[779]: INFO : Ignition finished successfully
Feb 9 19:25:18.267050 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:25:18.328305 initrd-setup-root-after-ignition[804]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:25:18.280138 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:25:18.281606 systemd[1]: Starting ignition-quench.service...
Feb 9 19:25:18.289379 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:25:18.289560 systemd[1]: Finished ignition-quench.service.
Feb 9 19:25:18.301296 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:25:18.303399 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:25:18.309800 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:25:18.339428 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:25:18.339650 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:25:18.348155 kernel: audit: type=1130 audit(1707506718.340:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.348177 kernel: audit: type=1131 audit(1707506718.340:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.341530 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:25:18.349281 systemd[1]: Reached target initrd.target.
Feb 9 19:25:18.350803 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:25:18.352512 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:25:18.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.367584 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:25:18.376703 kernel: audit: type=1130 audit(1707506718.367:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.376828 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:25:18.391053 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:25:18.392142 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:25:18.393187 systemd[1]: Stopped target timers.target.
Feb 9 19:25:18.394178 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:25:18.394847 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:25:18.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.396005 systemd[1]: Stopped target initrd.target.
Feb 9 19:25:18.404592 kernel: audit: type=1131 audit(1707506718.395:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.405875 systemd[1]: Stopped target basic.target.
Feb 9 19:25:18.407571 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:25:18.409379 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:25:18.411112 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:25:18.412924 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:25:18.414635 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:25:18.416411 systemd[1]: Stopped target sysinit.target.
Feb 9 19:25:18.418114 systemd[1]: Stopped target local-fs.target.
Feb 9 19:25:18.419835 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:25:18.421567 systemd[1]: Stopped target swap.target.
Feb 9 19:25:18.422452 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:25:18.427303 kernel: audit: type=1131 audit(1707506718.423:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.422664 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:25:18.423733 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:25:18.432207 kernel: audit: type=1131 audit(1707506718.428:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.427754 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:25:18.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.427854 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:25:18.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.428816 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:25:18.428962 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:25:18.442109 iscsid[644]: iscsid shutting down.
Feb 9 19:25:18.432811 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:25:18.432949 systemd[1]: Stopped ignition-files.service.
Feb 9 19:25:18.434582 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:25:18.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.447688 ignition[817]: INFO : Ignition 2.14.0
Feb 9 19:25:18.447688 ignition[817]: INFO : Stage: umount
Feb 9 19:25:18.447688 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:25:18.447688 ignition[817]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:25:18.443869 systemd[1]: Stopping iscsid.service...
Feb 9 19:25:18.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.459042 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:25:18.459042 ignition[817]: INFO : umount: umount passed
Feb 9 19:25:18.459042 ignition[817]: INFO : Ignition finished successfully
Feb 9 19:25:18.444346 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:25:18.444585 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:25:18.448207 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:25:18.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.448885 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:25:18.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.449073 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:25:18.454158 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:25:18.454336 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:25:18.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.457571 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:25:18.457698 systemd[1]: Stopped iscsid.service.
Feb 9 19:25:18.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.464015 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:25:18.464140 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:25:18.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.468142 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:25:18.468275 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:25:18.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.471564 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:25:18.471618 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:25:18.472603 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:25:18.472643 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:25:18.474468 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:25:18.474511 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:25:18.475936 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:25:18.475990 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:25:18.476522 systemd[1]: Stopped target paths.target.
Feb 9 19:25:18.476936 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:25:18.481400 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:25:18.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.482134 systemd[1]: Stopped target slices.target.
Feb 9 19:25:18.482612 systemd[1]: Stopped target sockets.target.
Feb 9 19:25:18.483625 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:25:18.483658 systemd[1]: Closed iscsid.socket.
Feb 9 19:25:18.484563 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:25:18.484605 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:25:18.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.486528 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:25:18.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.489517 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:25:18.489894 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:25:18.490010 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:25:18.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.491287 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:25:18.491366 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:25:18.492063 systemd[1]: Stopped target network.target.
Feb 9 19:25:18.492816 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:25:18.492846 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:25:18.493656 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:25:18.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.493693 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:25:18.494982 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:25:18.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.495682 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:25:18.505000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:25:18.498308 systemd-networkd[634]: eth0: DHCPv6 lease lost
Feb 9 19:25:18.505000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:25:18.499373 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:25:18.499463 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:25:18.502156 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:25:18.502449 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:25:18.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.504848 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:25:18.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.504898 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:25:18.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.506502 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:25:18.508520 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:25:18.508577 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:25:18.513556 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:25:18.513614 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:25:18.514676 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:25:18.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.514718 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:25:18.515581 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:25:18.518088 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:25:18.518668 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:25:18.518791 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:25:18.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.521427 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:25:18.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.521471 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:25:18.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.524078 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:25:18.524107 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:25:18.525202 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:25:18.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.525268 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:25:18.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.526256 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:25:18.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.526298 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:25:18.527107 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:25:18.527142 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:25:18.528778 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:25:18.535062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:25:18.535107 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:25:18.536338 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:25:18.536429 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:25:18.537141 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:25:18.537214 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:25:18.537994 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:25:18.539588 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:25:18.548312 systemd[1]: Switching root.
Feb 9 19:25:18.553000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:25:18.553000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:25:18.553000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:25:18.553000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:25:18.553000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:25:18.569267 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:25:18.569370 systemd-journald[185]: Journal stopped
Feb 9 19:25:22.723103 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:25:22.723150 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:25:22.723163 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:25:22.723176 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:25:22.723187 kernel: SELinux: policy capability open_perms=1
Feb 9 19:25:22.723201 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:25:22.723213 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:25:22.723242 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:25:22.723255 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:25:22.723271 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:25:22.723286 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:25:22.723301 systemd[1]: Successfully loaded SELinux policy in 84.107ms.
Feb 9 19:25:22.723320 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.285ms.
Feb 9 19:25:22.723334 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:25:22.723349 systemd[1]: Detected virtualization kvm.
Feb 9 19:25:22.723361 systemd[1]: Detected architecture x86-64.
Feb 9 19:25:22.723374 systemd[1]: Detected first boot.
Feb 9 19:25:22.723387 systemd[1]: Hostname set to .
Feb 9 19:25:22.723399 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:25:22.723412 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:25:22.723423 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:25:22.723438 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:25:22.723451 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:25:22.723465 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:25:22.723478 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:25:22.723491 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 19:25:22.723503 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:25:22.723517 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:25:22.723530 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 19:25:22.723542 systemd[1]: Created slice system-getty.slice.
Feb 9 19:25:22.723555 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:25:22.723567 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:25:22.723579 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:25:22.723591 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:25:22.723604 systemd[1]: Created slice user.slice.
Feb 9 19:25:22.723618 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:25:22.723632 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:25:22.723644 systemd[1]: Set up automount boot.automount.
Feb 9 19:25:22.723656 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:25:22.723668 systemd[1]: Reached target integritysetup.target.
Feb 9 19:25:22.723679 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:25:22.723691 systemd[1]: Reached target remote-fs.target.
Feb 9 19:25:22.723702 systemd[1]: Reached target slices.target.
Feb 9 19:25:22.723715 systemd[1]: Reached target swap.target.
Feb 9 19:25:22.723726 systemd[1]: Reached target torcx.target.
Feb 9 19:25:22.723738 systemd[1]: Reached target veritysetup.target.
Feb 9 19:25:22.723749 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:25:22.723761 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:25:22.723773 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:25:22.723785 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:25:22.723797 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:25:22.723808 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:25:22.723819 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:25:22.723831 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:25:22.723843 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:25:22.723854 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:25:22.723866 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:25:22.723877 systemd[1]: Mounting media.mount...
Feb 9 19:25:22.723889 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:25:22.723901 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:25:22.723912 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:25:22.723923 systemd[1]: Mounting tmp.mount...
Feb 9 19:25:22.723936 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:25:22.723948 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:25:22.723959 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:25:22.723971 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:25:22.723983 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:25:22.723994 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:25:22.724014 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:25:22.724042 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:25:22.724061 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:25:22.724083 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:25:22.724096 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 9 19:25:22.724107 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 9 19:25:22.724119 systemd[1]: Starting systemd-journald.service...
Feb 9 19:25:22.724132 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:25:22.724144 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:25:22.724157 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:25:22.724173 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:25:22.724185 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:25:22.724201 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:25:22.724213 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:25:22.724252 systemd[1]: Mounted media.mount.
Feb 9 19:25:22.724269 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:25:22.724281 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:25:22.724294 systemd[1]: Mounted tmp.mount.
Feb 9 19:25:22.724306 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:25:22.724318 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:25:22.724330 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:25:22.724345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:25:22.724356 kernel: fuse: init (API version 7.34)
Feb 9 19:25:22.724368 kernel: loop: module loaded
Feb 9 19:25:22.724379 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:25:22.724392 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:25:22.724403 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:25:22.724415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:25:22.724430 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:25:22.724461 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:25:22.724492 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:25:22.724508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:25:22.724522 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:25:22.724536 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:25:22.724550 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:25:22.724566 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:25:22.724580 systemd[1]: Reached target network-pre.target.
Feb 9 19:25:22.724594 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:25:22.724608 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:25:22.724627 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:25:22.724645 systemd-journald[951]: Journal started
Feb 9 19:25:22.724691 systemd-journald[951]: Runtime Journal (/run/log/journal/6ee487d2db1d480d9fa62531544af32f) is 4.9M, max 39.5M, 34.5M free.
Feb 9 19:25:22.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:22.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:22.721000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:25:22.721000 audit[951]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffcba0cb340 a2=4000 a3=7ffcba0cb3dc items=0 ppid=1 pid=951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:22.721000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:25:22.745430 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:25:22.745492 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:25:22.745510 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:25:22.745530 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:25:22.748793 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:25:22.756251 systemd[1]: Started systemd-journald.service. Feb 9 19:25:22.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.754553 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:25:22.755281 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:25:22.758420 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:25:22.771708 systemd-journald[951]: Time spent on flushing to /var/log/journal/6ee487d2db1d480d9fa62531544af32f is 24.885ms for 1067 entries. Feb 9 19:25:22.771708 systemd-journald[951]: System Journal (/var/log/journal/6ee487d2db1d480d9fa62531544af32f) is 8.0M, max 584.8M, 576.8M free. 
Feb 9 19:25:23.006535 systemd-journald[951]: Received client request to flush runtime journal. Feb 9 19:25:22.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:22.803436 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:25:22.806747 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:25:23.006969 udevadm[1001]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:25:22.824109 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:25:22.825965 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:25:22.919037 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:25:22.921349 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:25:22.922634 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:25:23.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:23.008054 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:25:23.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.009898 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:25:23.011681 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:25:23.051747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:25:23.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.503078 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:25:23.516451 kernel: kauditd_printk_skb: 77 callbacks suppressed Feb 9 19:25:23.516588 kernel: audit: type=1130 audit(1707506723.504:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.507058 systemd[1]: Starting systemd-udevd.service... Feb 9 19:25:23.561540 systemd-udevd[1016]: Using default interface naming scheme 'v252'. Feb 9 19:25:23.614413 systemd[1]: Started systemd-udevd.service. Feb 9 19:25:23.626427 kernel: audit: type=1130 audit(1707506723.615:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:23.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.619085 systemd[1]: Starting systemd-networkd.service... Feb 9 19:25:23.647957 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:25:23.692563 systemd[1]: Started systemd-userdbd.service. Feb 9 19:25:23.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.699277 kernel: audit: type=1130 audit(1707506723.692:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.717419 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:25:23.790910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:25:23.797326 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 19:25:23.801582 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:25:23.818124 systemd-networkd[1018]: lo: Link UP Feb 9 19:25:23.818134 systemd-networkd[1018]: lo: Gained carrier Feb 9 19:25:23.818631 systemd-networkd[1018]: Enumeration completed Feb 9 19:25:23.818773 systemd[1]: Started systemd-networkd.service. Feb 9 19:25:23.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.819649 systemd-networkd[1018]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 19:25:23.824444 kernel: audit: type=1130 audit(1707506723.819:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.824521 systemd-networkd[1018]: eth0: Link UP Feb 9 19:25:23.824527 systemd-networkd[1018]: eth0: Gained carrier Feb 9 19:25:23.835350 systemd-networkd[1018]: eth0: DHCPv4 address 172.24.4.217/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:25:23.833000 audit[1019]: AVC avc: denied { confidentiality } for pid=1019 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:25:23.861134 kernel: audit: type=1400 audit(1707506723.833:120): avc: denied { confidentiality } for pid=1019 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:25:23.861194 kernel: audit: type=1300 audit(1707506723.833:120): arch=c000003e syscall=175 success=yes exit=0 a0=564c3d3ca0a0 a1=32194 a2=7f1e1294ebc5 a3=5 items=108 ppid=1016 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:23.833000 audit[1019]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564c3d3ca0a0 a1=32194 a2=7f1e1294ebc5 a3=5 items=108 ppid=1016 pid=1019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:23.833000 audit: CWD cwd="/" Feb 9 19:25:23.833000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.871291 kernel: audit: type=1307 audit(1707506723.833:120): cwd="/" Feb 9 19:25:23.871336 kernel: audit: type=1302 audit(1707506723.833:120): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.871356 kernel: audit: type=1302 audit(1707506723.833:120): item=1 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=1 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=2 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.878034 kernel: audit: type=1302 audit(1707506723.833:120): item=2 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=3 name=(null) inode=13845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=4 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=5 name=(null) inode=13846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=6 
name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=7 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=8 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=9 name=(null) inode=13848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=10 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=11 name=(null) inode=13849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=12 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=13 name=(null) inode=13850 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=14 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=15 name=(null) inode=13851 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=16 name=(null) inode=13847 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=17 name=(null) inode=13852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=18 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=19 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=20 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=21 name=(null) inode=13854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=22 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=23 name=(null) inode=13855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=24 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=25 name=(null) inode=13856 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=26 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=27 name=(null) inode=13857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=28 name=(null) inode=13853 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=29 name=(null) inode=13858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=30 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=31 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=32 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=33 name=(null) inode=13860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=34 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=35 name=(null) inode=13861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=36 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=37 name=(null) inode=13862 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=38 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=39 name=(null) inode=13863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=40 name=(null) inode=13859 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=41 name=(null) inode=13864 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=42 name=(null) inode=13844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 
audit: PATH item=43 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=44 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=45 name=(null) inode=13866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=46 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=47 name=(null) inode=13867 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=48 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=49 name=(null) inode=13868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=50 name=(null) inode=13865 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=51 name=(null) inode=13869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=52 name=(null) inode=13865 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=53 name=(null) inode=13870 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=55 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=56 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=57 name=(null) inode=13872 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=58 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=59 name=(null) inode=13873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=60 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=61 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=62 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=63 name=(null) inode=13875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=64 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=65 name=(null) inode=13876 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=66 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=67 name=(null) inode=13877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=68 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=69 name=(null) inode=13878 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=70 name=(null) inode=13874 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=71 name=(null) inode=13879 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=72 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=73 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=74 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=75 name=(null) inode=13881 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=76 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=77 name=(null) inode=13882 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=78 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=79 name=(null) inode=13883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:25:23.833000 audit: PATH item=80 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=81 name=(null) inode=13884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=82 name=(null) inode=13880 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=83 name=(null) inode=13885 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=84 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=85 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=86 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=87 name=(null) inode=13887 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=88 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=89 
name=(null) inode=13888 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=90 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=91 name=(null) inode=13889 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=92 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=93 name=(null) inode=13890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=94 name=(null) inode=13886 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=95 name=(null) inode=13891 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=96 name=(null) inode=13871 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=97 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=98 name=(null) inode=13892 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=99 name=(null) inode=13893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=100 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=101 name=(null) inode=13894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=102 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=103 name=(null) inode=13895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=104 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=105 name=(null) inode=13896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=106 name=(null) inode=13892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PATH item=107 name=(null) inode=13897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:25:23.833000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:25:23.883279 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 9 19:25:23.891269 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 9 19:25:23.894252 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:25:23.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.939684 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:25:23.941495 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:25:23.967675 lvm[1047]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:25:23.993080 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:25:23.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:23.993663 systemd[1]: Reached target cryptsetup.target. Feb 9 19:25:23.995712 systemd[1]: Starting lvm2-activation.service... Feb 9 19:25:24.006349 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:25:24.043291 systemd[1]: Finished lvm2-activation.service. Feb 9 19:25:24.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:24.044656 systemd[1]: Reached target local-fs-pre.target. 
Feb 9 19:25:24.045756 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:25:24.045813 systemd[1]: Reached target local-fs.target. Feb 9 19:25:24.046924 systemd[1]: Reached target machines.target. Feb 9 19:25:24.050775 systemd[1]: Starting ldconfig.service... Feb 9 19:25:24.053686 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:25:24.053806 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:25:24.058289 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:25:24.061902 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:25:24.069837 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:25:24.071369 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:25:24.071475 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:25:24.074612 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:25:24.093503 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1052 (bootctl) Feb 9 19:25:24.095953 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:25:24.139570 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:25:24.144025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:25:24.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:24.148808 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:25:24.183183 systemd-tmpfiles[1055]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:25:24.952633 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:25:24.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:24.956023 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:25:25.116295 systemd-fsck[1061]: fsck.fat 4.2 (2021-01-31) Feb 9 19:25:25.116295 systemd-fsck[1061]: /dev/vda1: 789 files, 115339/258078 clusters Feb 9 19:25:25.119805 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:25:25.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:25.123958 systemd[1]: Mounting boot.mount... Feb 9 19:25:25.151900 systemd[1]: Mounted boot.mount. Feb 9 19:25:25.182358 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:25:25.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:25.258354 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:25:25.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:25.261521 systemd[1]: Starting audit-rules.service... Feb 9 19:25:25.264807 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:25:25.267843 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:25:25.275216 systemd[1]: Starting systemd-resolved.service... Feb 9 19:25:25.283629 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:25:25.290565 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:25:25.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:25.294899 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:25:25.297962 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:25:25.306000 audit[1081]: SYSTEM_BOOT pid=1081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:25:25.310140 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:25:25.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:25.330739 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:25:25.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:25.357000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:25:25.357000 audit[1092]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc91ecc5d0 a2=420 a3=0 items=0 ppid=1069 pid=1092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:25.357000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:25:25.359203 augenrules[1092]: No rules Feb 9 19:25:25.358864 systemd[1]: Finished audit-rules.service. Feb 9 19:25:25.413442 systemd-resolved[1075]: Positive Trust Anchors: Feb 9 19:25:25.413910 systemd-resolved[1075]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:25:25.414065 systemd-resolved[1075]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:25:25.416533 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:25:25.417162 systemd[1]: Reached target time-set.target. Feb 9 19:25:25.425562 systemd-resolved[1075]: Using system hostname 'ci-3510-3-2-b-76a749f546.novalocal'. Feb 9 19:25:25.427584 systemd[1]: Started systemd-resolved.service. Feb 9 19:25:25.428114 systemd[1]: Reached target network.target. Feb 9 19:25:25.428594 systemd[1]: Reached target nss-lookup.target. Feb 9 19:25:26.026771 systemd-resolved[1075]: Clock change detected. Flushing caches. 
Feb 9 19:25:26.027152 systemd-timesyncd[1080]: Contacted time server 5.39.80.51:123 (0.flatcar.pool.ntp.org). Feb 9 19:25:26.027283 systemd-timesyncd[1080]: Initial clock synchronization to Fri 2024-02-09 19:25:26.026716 UTC. Feb 9 19:25:26.138990 ldconfig[1051]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:25:26.152873 systemd[1]: Finished ldconfig.service. Feb 9 19:25:26.157170 systemd[1]: Starting systemd-update-done.service... Feb 9 19:25:26.173029 systemd[1]: Finished systemd-update-done.service. Feb 9 19:25:26.174421 systemd[1]: Reached target sysinit.target. Feb 9 19:25:26.175764 systemd[1]: Started motdgen.path. Feb 9 19:25:26.176887 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:25:26.177568 systemd-networkd[1018]: eth0: Gained IPv6LL Feb 9 19:25:26.178771 systemd[1]: Started logrotate.timer. Feb 9 19:25:26.180235 systemd[1]: Started mdadm.timer. Feb 9 19:25:26.181322 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:25:26.182589 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:25:26.182659 systemd[1]: Reached target paths.target. Feb 9 19:25:26.183726 systemd[1]: Reached target timers.target. Feb 9 19:25:26.185705 systemd[1]: Listening on dbus.socket. Feb 9 19:25:26.189103 systemd[1]: Starting docker.socket... Feb 9 19:25:26.193329 systemd[1]: Listening on sshd.socket. Feb 9 19:25:26.194636 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:25:26.195579 systemd[1]: Listening on docker.socket. Feb 9 19:25:26.196682 systemd[1]: Reached target sockets.target. Feb 9 19:25:26.197767 systemd[1]: Reached target basic.target. 
Feb 9 19:25:26.199202 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:25:26.199328 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:25:26.199386 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:25:26.201752 systemd[1]: Starting containerd.service... Feb 9 19:25:26.204974 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:25:26.208534 systemd[1]: Starting dbus.service... Feb 9 19:25:26.214324 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:25:26.217813 systemd[1]: Starting extend-filesystems.service... Feb 9 19:25:26.221252 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:25:26.225466 systemd[1]: Starting motdgen.service... Feb 9 19:25:26.303062 jq[1110]: false Feb 9 19:25:26.227493 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:25:26.231007 systemd[1]: Starting prepare-critools.service... Feb 9 19:25:26.232906 systemd[1]: Starting prepare-helm.service... Feb 9 19:25:26.236418 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:25:26.237812 systemd[1]: Starting sshd-keygen.service... Feb 9 19:25:26.245076 systemd[1]: Starting systemd-logind.service... Feb 9 19:25:26.303937 tar[1126]: ./ Feb 9 19:25:26.303937 tar[1126]: ./macvlan Feb 9 19:25:26.245560 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:25:26.245631 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:25:26.246808 systemd[1]: Starting update-engine.service... Feb 9 19:25:26.248435 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Feb 9 19:25:26.255262 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:25:26.304809 tar[1128]: crictl Feb 9 19:25:26.255483 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:25:26.320887 jq[1124]: true Feb 9 19:25:26.292097 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:25:26.321122 tar[1130]: linux-amd64/helm Feb 9 19:25:26.292496 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:25:26.321452 jq[1137]: true Feb 9 19:25:26.327986 extend-filesystems[1111]: Found vda Feb 9 19:25:26.327986 extend-filesystems[1111]: Found vda1 Feb 9 19:25:26.327986 extend-filesystems[1111]: Found vda2 Feb 9 19:25:26.327986 extend-filesystems[1111]: Found vda3 Feb 9 19:25:26.327986 extend-filesystems[1111]: Found usr Feb 9 19:25:26.327986 extend-filesystems[1111]: Found vda4 Feb 9 19:25:26.327986 extend-filesystems[1111]: Found vda6 Feb 9 19:25:26.327986 extend-filesystems[1111]: Found vda7 Feb 9 19:25:26.327986 extend-filesystems[1111]: Found vda9 Feb 9 19:25:26.327986 extend-filesystems[1111]: Checking size of /dev/vda9 Feb 9 19:25:26.379367 extend-filesystems[1111]: Resized partition /dev/vda9 Feb 9 19:25:26.376358 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:25:26.386229 dbus-daemon[1107]: [system] SELinux support is enabled Feb 9 19:25:26.376583 systemd[1]: Finished motdgen.service. Feb 9 19:25:26.386386 systemd[1]: Started dbus.service. Feb 9 19:25:26.388646 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:25:26.388679 systemd[1]: Reached target system-config.target. Feb 9 19:25:26.389182 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:25:26.389204 systemd[1]: Reached target user-config.target. 
Feb 9 19:25:26.399855 extend-filesystems[1173]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:25:26.429927 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 9 19:25:26.454880 update_engine[1123]: I0209 19:25:26.453675 1123 main.cc:92] Flatcar Update Engine starting Feb 9 19:25:26.500259 update_engine[1123]: I0209 19:25:26.461282 1123 update_check_scheduler.cc:74] Next update check in 4m54s Feb 9 19:25:26.461222 systemd[1]: Started update-engine.service. Feb 9 19:25:26.463483 systemd[1]: Started locksmithd.service. Feb 9 19:25:26.501784 systemd-logind[1122]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:25:26.501811 systemd-logind[1122]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:25:26.502672 systemd-logind[1122]: New seat seat0. Feb 9 19:25:26.509223 env[1135]: time="2024-02-09T19:25:26.507034083Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:25:26.509450 bash[1170]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:25:26.509484 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:25:26.510800 systemd[1]: Started systemd-logind.service. Feb 9 19:25:26.517267 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 9 19:25:26.582558 coreos-metadata[1105]: Feb 09 19:25:26.522 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 9 19:25:26.582929 env[1135]: time="2024-02-09T19:25:26.543656789Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:25:26.582929 env[1135]: time="2024-02-09T19:25:26.580239630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:25:26.582999 extend-filesystems[1173]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 19:25:26.582999 extend-filesystems[1173]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 9 19:25:26.582999 extend-filesystems[1173]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 9 19:25:26.593996 extend-filesystems[1111]: Resized filesystem in /dev/vda9 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.585126065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.585154989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.585403906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.585423793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.585437920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.585449461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.585524783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.585771054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.588180383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:25:26.595004 env[1135]: time="2024-02-09T19:25:26.588199920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:25:26.583701 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:25:26.597148 env[1135]: time="2024-02-09T19:25:26.588252769Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:25:26.597148 env[1135]: time="2024-02-09T19:25:26.588266575Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:25:26.583976 systemd[1]: Finished extend-filesystems.service. Feb 9 19:25:26.605534 env[1135]: time="2024-02-09T19:25:26.605433351Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:25:26.605534 env[1135]: time="2024-02-09T19:25:26.605487142Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:25:26.605534 env[1135]: time="2024-02-09T19:25:26.605504314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:25:26.605662 env[1135]: time="2024-02-09T19:25:26.605548427Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 9 19:25:26.605662 env[1135]: time="2024-02-09T19:25:26.605575969Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:25:26.605662 env[1135]: time="2024-02-09T19:25:26.605602338Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:25:26.605662 env[1135]: time="2024-02-09T19:25:26.605618057Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:25:26.605662 env[1135]: time="2024-02-09T19:25:26.605634067Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:25:26.605662 env[1135]: time="2024-02-09T19:25:26.605650128Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:25:26.605818 env[1135]: time="2024-02-09T19:25:26.605671317Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:25:26.605818 env[1135]: time="2024-02-09T19:25:26.605686736Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:25:26.605818 env[1135]: time="2024-02-09T19:25:26.605701404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:25:26.605912 env[1135]: time="2024-02-09T19:25:26.605824435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:25:26.606138 env[1135]: time="2024-02-09T19:25:26.606106293Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:25:26.606538 env[1135]: time="2024-02-09T19:25:26.606505161Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 9 19:25:26.606590 env[1135]: time="2024-02-09T19:25:26.606540327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606590 env[1135]: time="2024-02-09T19:25:26.606557740Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:25:26.606641 env[1135]: time="2024-02-09T19:25:26.606602514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606641 env[1135]: time="2024-02-09T19:25:26.606618814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606641 env[1135]: time="2024-02-09T19:25:26.606633973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606711 env[1135]: time="2024-02-09T19:25:26.606651696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606711 env[1135]: time="2024-02-09T19:25:26.606667756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606711 env[1135]: time="2024-02-09T19:25:26.606682203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606711 env[1135]: time="2024-02-09T19:25:26.606696540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606802 env[1135]: time="2024-02-09T19:25:26.606709925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606802 env[1135]: time="2024-02-09T19:25:26.606725925Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 9 19:25:26.606909 env[1135]: time="2024-02-09T19:25:26.606867571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606954 env[1135]: time="2024-02-09T19:25:26.606925870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.606954 env[1135]: time="2024-02-09T19:25:26.606943914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:25:26.607021 env[1135]: time="2024-02-09T19:25:26.606959343Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:25:26.607021 env[1135]: time="2024-02-09T19:25:26.606977267Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:25:26.607021 env[1135]: time="2024-02-09T19:25:26.606991073Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:25:26.607021 env[1135]: time="2024-02-09T19:25:26.607010489Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:25:26.607117 env[1135]: time="2024-02-09T19:25:26.607050193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:25:26.607350 env[1135]: time="2024-02-09T19:25:26.607280075Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:25:26.613668 env[1135]: time="2024-02-09T19:25:26.607357460Z" level=info msg="Connect containerd service" Feb 9 19:25:26.613668 env[1135]: time="2024-02-09T19:25:26.607394309Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:25:26.613668 env[1135]: time="2024-02-09T19:25:26.611065705Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:25:26.613793 env[1135]: time="2024-02-09T19:25:26.613770567Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:25:26.615538 env[1135]: time="2024-02-09T19:25:26.613821102Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:25:26.615538 env[1135]: time="2024-02-09T19:25:26.613883640Z" level=info msg="Start subscribing containerd event" Feb 9 19:25:26.615538 env[1135]: time="2024-02-09T19:25:26.613966705Z" level=info msg="Start recovering state" Feb 9 19:25:26.615538 env[1135]: time="2024-02-09T19:25:26.614035815Z" level=info msg="Start event monitor" Feb 9 19:25:26.615538 env[1135]: time="2024-02-09T19:25:26.614120834Z" level=info msg="Start snapshots syncer" Feb 9 19:25:26.615538 env[1135]: time="2024-02-09T19:25:26.614133418Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:25:26.615538 env[1135]: time="2024-02-09T19:25:26.614143988Z" level=info msg="Start streaming server" Feb 9 19:25:26.614001 systemd[1]: Started containerd.service. 
Feb 9 19:25:26.626854 tar[1126]: ./static Feb 9 19:25:26.642036 env[1135]: time="2024-02-09T19:25:26.641961329Z" level=info msg="containerd successfully booted in 0.203518s" Feb 9 19:25:26.717347 tar[1126]: ./vlan Feb 9 19:25:26.792677 coreos-metadata[1105]: Feb 09 19:25:26.792 INFO Fetch successful Feb 9 19:25:26.792677 coreos-metadata[1105]: Feb 09 19:25:26.792 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:25:26.805261 coreos-metadata[1105]: Feb 09 19:25:26.805 INFO Fetch successful Feb 9 19:25:26.810287 unknown[1105]: wrote ssh authorized keys file for user: core Feb 9 19:25:26.812937 tar[1126]: ./portmap Feb 9 19:25:26.838475 update-ssh-keys[1186]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:25:26.838802 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:25:26.918066 tar[1126]: ./host-local Feb 9 19:25:27.006072 tar[1126]: ./vrf Feb 9 19:25:27.092288 tar[1126]: ./bridge Feb 9 19:25:27.113133 systemd[1]: Finished prepare-critools.service. Feb 9 19:25:27.179102 tar[1126]: ./tuning Feb 9 19:25:27.231616 tar[1126]: ./firewall Feb 9 19:25:27.318414 tar[1126]: ./host-device Feb 9 19:25:27.381535 tar[1126]: ./sbr Feb 9 19:25:27.423406 tar[1130]: linux-amd64/LICENSE Feb 9 19:25:27.423862 tar[1130]: linux-amd64/README.md Feb 9 19:25:27.431002 systemd[1]: Finished prepare-helm.service. Feb 9 19:25:27.435993 tar[1126]: ./loopback Feb 9 19:25:27.466054 tar[1126]: ./dhcp Feb 9 19:25:27.560786 tar[1126]: ./ptp Feb 9 19:25:27.599576 tar[1126]: ./ipvlan Feb 9 19:25:27.637996 tar[1126]: ./bandwidth Feb 9 19:25:27.769176 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:25:27.779308 locksmithd[1178]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:25:27.782891 systemd[1]: Created slice system-sshd.slice. 
Feb 9 19:25:28.600280 sshd_keygen[1160]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:25:28.624491 systemd[1]: Finished sshd-keygen.service. Feb 9 19:25:28.626643 systemd[1]: Starting issuegen.service... Feb 9 19:25:28.628175 systemd[1]: Started sshd@0-172.24.4.217:22-172.24.4.1:59872.service. Feb 9 19:25:28.636073 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:25:28.636286 systemd[1]: Finished issuegen.service. Feb 9 19:25:28.638174 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:25:28.655792 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:25:28.658045 systemd[1]: Started getty@tty1.service. Feb 9 19:25:28.659814 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:25:28.660558 systemd[1]: Reached target getty.target. Feb 9 19:25:28.661443 systemd[1]: Reached target multi-user.target. Feb 9 19:25:28.663333 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:25:28.673550 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:25:28.673798 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:25:28.685666 systemd[1]: Startup finished in 13.014s (kernel) + 9.276s (userspace) = 22.290s. Feb 9 19:25:29.872173 sshd[1214]: Accepted publickey for core from 172.24.4.1 port 59872 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:25:29.875775 sshd[1214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:29.903986 systemd-logind[1122]: New session 1 of user core. Feb 9 19:25:29.907265 systemd[1]: Created slice user-500.slice. Feb 9 19:25:29.909444 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:25:29.930857 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:25:29.934084 systemd[1]: Starting user@500.service... 
Feb 9 19:25:29.944142 (systemd)[1228]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:30.063701 systemd[1228]: Queued start job for default target default.target. Feb 9 19:25:30.064236 systemd[1228]: Reached target paths.target. Feb 9 19:25:30.064348 systemd[1228]: Reached target sockets.target. Feb 9 19:25:30.064484 systemd[1228]: Reached target timers.target. Feb 9 19:25:30.064576 systemd[1228]: Reached target basic.target. Feb 9 19:25:30.064692 systemd[1228]: Reached target default.target. Feb 9 19:25:30.064804 systemd[1228]: Startup finished in 108ms. Feb 9 19:25:30.065036 systemd[1]: Started user@500.service. Feb 9 19:25:30.067002 systemd[1]: Started session-1.scope. Feb 9 19:25:30.585978 systemd[1]: Started sshd@1-172.24.4.217:22-172.24.4.1:59888.service. Feb 9 19:25:31.914511 sshd[1237]: Accepted publickey for core from 172.24.4.1 port 59888 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:25:31.917204 sshd[1237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:31.928348 systemd-logind[1122]: New session 2 of user core. Feb 9 19:25:31.929281 systemd[1]: Started session-2.scope. Feb 9 19:25:32.717034 sshd[1237]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:32.722423 systemd[1]: Started sshd@2-172.24.4.217:22-172.24.4.1:59894.service. Feb 9 19:25:32.727684 systemd[1]: sshd@1-172.24.4.217:22-172.24.4.1:59888.service: Deactivated successfully. Feb 9 19:25:32.730609 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:25:32.732208 systemd-logind[1122]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:25:32.734804 systemd-logind[1122]: Removed session 2. 
Feb 9 19:25:34.003185 sshd[1242]: Accepted publickey for core from 172.24.4.1 port 59894 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:25:34.006333 sshd[1242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:34.016015 systemd-logind[1122]: New session 3 of user core. Feb 9 19:25:34.016718 systemd[1]: Started session-3.scope. Feb 9 19:25:34.806780 sshd[1242]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:34.811718 systemd[1]: Started sshd@3-172.24.4.217:22-172.24.4.1:55260.service. Feb 9 19:25:34.817993 systemd[1]: sshd@2-172.24.4.217:22-172.24.4.1:59894.service: Deactivated successfully. Feb 9 19:25:34.826792 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:25:34.827265 systemd-logind[1122]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:25:34.830522 systemd-logind[1122]: Removed session 3. Feb 9 19:25:36.093711 sshd[1249]: Accepted publickey for core from 172.24.4.1 port 55260 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:25:36.096973 sshd[1249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:36.107264 systemd[1]: Started session-4.scope. Feb 9 19:25:36.108212 systemd-logind[1122]: New session 4 of user core. Feb 9 19:25:36.761453 sshd[1249]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:36.764367 systemd[1]: Started sshd@4-172.24.4.217:22-172.24.4.1:55276.service. Feb 9 19:25:36.770336 systemd[1]: sshd@3-172.24.4.217:22-172.24.4.1:55260.service: Deactivated successfully. Feb 9 19:25:36.776386 systemd-logind[1122]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:25:36.776542 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:25:36.780140 systemd-logind[1122]: Removed session 4. 
Feb 9 19:25:38.052559 sshd[1256]: Accepted publickey for core from 172.24.4.1 port 55276 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:25:38.054886 sshd[1256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:38.063568 systemd-logind[1122]: New session 5 of user core. Feb 9 19:25:38.064179 systemd[1]: Started session-5.scope. Feb 9 19:25:38.586055 sudo[1262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 19:25:38.587106 sudo[1262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:25:38.597159 dbus-daemon[1107]: \xd0m\xf0\x86\x87U: received setenforce notice (enforcing=-2021248080) Feb 9 19:25:38.600699 sudo[1262]: pam_unix(sudo:session): session closed for user root Feb 9 19:25:38.763215 sshd[1256]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:38.763876 systemd[1]: Started sshd@5-172.24.4.217:22-172.24.4.1:55280.service. Feb 9 19:25:38.770095 systemd[1]: sshd@4-172.24.4.217:22-172.24.4.1:55276.service: Deactivated successfully. Feb 9 19:25:38.771686 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:25:38.774516 systemd-logind[1122]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:25:38.778188 systemd-logind[1122]: Removed session 5. Feb 9 19:25:40.091984 sshd[1264]: Accepted publickey for core from 172.24.4.1 port 55280 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:25:40.095240 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:40.105804 systemd[1]: Started session-6.scope. Feb 9 19:25:40.107149 systemd-logind[1122]: New session 6 of user core. 
Feb 9 19:25:40.601815 sudo[1271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 19:25:40.602991 sudo[1271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:25:40.609122 sudo[1271]: pam_unix(sudo:session): session closed for user root Feb 9 19:25:40.619272 sudo[1270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 19:25:40.619725 sudo[1270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:25:40.641024 systemd[1]: Stopping audit-rules.service... Feb 9 19:25:40.641000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:25:40.644719 kernel: kauditd_printk_skb: 121 callbacks suppressed Feb 9 19:25:40.644886 kernel: audit: type=1305 audit(1707506740.641:134): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:25:40.641000 audit[1274]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd24ade80 a2=420 a3=0 items=0 ppid=1 pid=1274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:40.650084 auditctl[1274]: No rules Feb 9 19:25:40.651148 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 19:25:40.651604 systemd[1]: Stopped audit-rules.service. Feb 9 19:25:40.655170 systemd[1]: Starting audit-rules.service... 
Feb 9 19:25:40.660677 kernel: audit: type=1300 audit(1707506740.641:134): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd24ade80 a2=420 a3=0 items=0 ppid=1 pid=1274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:40.641000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:25:40.678391 kernel: audit: type=1327 audit(1707506740.641:134): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:25:40.678515 kernel: audit: type=1131 audit(1707506740.649:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.707089 augenrules[1292]: No rules Feb 9 19:25:40.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.710989 sudo[1270]: pam_unix(sudo:session): session closed for user root Feb 9 19:25:40.708871 systemd[1]: Finished audit-rules.service. Feb 9 19:25:40.708000 audit[1270]: USER_END pid=1270 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.723571 kernel: audit: type=1130 audit(1707506740.708:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:25:40.723688 kernel: audit: type=1106 audit(1707506740.708:137): pid=1270 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.723740 kernel: audit: type=1104 audit(1707506740.708:138): pid=1270 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.708000 audit[1270]: CRED_DISP pid=1270 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.871232 sshd[1264]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:40.876287 systemd[1]: Started sshd@6-172.24.4.217:22-172.24.4.1:55282.service. Feb 9 19:25:40.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.217:22-172.24.4.1:55282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.887988 kernel: audit: type=1130 audit(1707506740.875:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.217:22-172.24.4.1:55282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:40.891392 systemd-logind[1122]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:25:40.892100 systemd[1]: sshd@5-172.24.4.217:22-172.24.4.1:55280.service: Deactivated successfully. Feb 9 19:25:40.893639 systemd[1]: session-6.scope: Deactivated successfully. 
Feb 9 19:25:40.887000 audit[1264]: USER_END pid=1264 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:25:40.903007 systemd-logind[1122]: Removed session 6. Feb 9 19:25:40.909624 kernel: audit: type=1106 audit(1707506740.887:140): pid=1264 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:25:40.909766 kernel: audit: type=1104 audit(1707506740.887:141): pid=1264 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:25:40.887000 audit[1264]: CRED_DISP pid=1264 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:25:40.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.24.4.217:22-172.24.4.1:55280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:42.167000 audit[1297]: USER_ACCT pid=1297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:25:42.168315 sshd[1297]: Accepted publickey for core from 172.24.4.1 port 55282 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:25:42.169000 audit[1297]: CRED_ACQ pid=1297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:25:42.169000 audit[1297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfa0620d0 a2=3 a3=0 items=0 ppid=1 pid=1297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.169000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:25:42.171180 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:42.180547 systemd-logind[1122]: New session 7 of user core. Feb 9 19:25:42.181275 systemd[1]: Started session-7.scope. 
Feb 9 19:25:42.192000 audit[1297]: USER_START pid=1297 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:25:42.195000 audit[1302]: CRED_ACQ pid=1302 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:25:42.675000 audit[1303]: USER_ACCT pid=1303 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:42.677352 sudo[1303]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:25:42.677000 audit[1303]: CRED_REFR pid=1303 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:42.678556 sudo[1303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:25:42.681000 audit[1303]: USER_START pid=1303 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:43.687686 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:25:43.700452 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:25:43.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:43.702723 systemd[1]: Reached target network-online.target. Feb 9 19:25:43.706839 systemd[1]: Starting docker.service... Feb 9 19:25:43.800834 env[1321]: time="2024-02-09T19:25:43.800686063Z" level=info msg="Starting up" Feb 9 19:25:43.803425 env[1321]: time="2024-02-09T19:25:43.803387770Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:25:43.803587 env[1321]: time="2024-02-09T19:25:43.803556266Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:25:43.803740 env[1321]: time="2024-02-09T19:25:43.803704033Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:25:43.803862 env[1321]: time="2024-02-09T19:25:43.803835109Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:25:43.806775 env[1321]: time="2024-02-09T19:25:43.806730630Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:25:43.806775 env[1321]: time="2024-02-09T19:25:43.806752260Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:25:43.806775 env[1321]: time="2024-02-09T19:25:43.806772608Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:25:43.807094 env[1321]: time="2024-02-09T19:25:43.806784992Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:25:43.816283 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport313790343-merged.mount: Deactivated successfully. 
Feb 9 19:25:43.986294 env[1321]: time="2024-02-09T19:25:43.986117139Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:25:43.986294 env[1321]: time="2024-02-09T19:25:43.986172012Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:25:43.987926 env[1321]: time="2024-02-09T19:25:43.987834921Z" level=info msg="Loading containers: start." Feb 9 19:25:44.079000 audit[1351]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.079000 audit[1351]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc99bfed30 a2=0 a3=7ffc99bfed1c items=0 ppid=1321 pid=1351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.079000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 19:25:44.080000 audit[1353]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.080000 audit[1353]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe922a1310 a2=0 a3=7ffe922a12fc items=0 ppid=1321 pid=1353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.080000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 19:25:44.082000 audit[1355]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1355 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.082000 audit[1355]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff0da6c230 a2=0 
a3=7fff0da6c21c items=0 ppid=1321 pid=1355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.082000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:25:44.084000 audit[1357]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.084000 audit[1357]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4b52e150 a2=0 a3=7fff4b52e13c items=0 ppid=1321 pid=1357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.084000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:25:44.087000 audit[1359]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.087000 audit[1359]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6236a120 a2=0 a3=7ffc6236a10c items=0 ppid=1321 pid=1359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.087000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 19:25:44.102000 audit[1365]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.102000 audit[1365]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd36ae3ad0 a2=0 a3=7ffd36ae3abc items=0 ppid=1321 pid=1365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.102000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 19:25:44.127000 audit[1367]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1367 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.127000 audit[1367]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd99584900 a2=0 a3=7ffd995848ec items=0 ppid=1321 pid=1367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.127000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 19:25:44.129000 audit[1369]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.129000 audit[1369]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe7084cdd0 a2=0 a3=7ffe7084cdbc items=0 ppid=1321 pid=1369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.129000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 19:25:44.131000 audit[1371]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 
19:25:44.131000 audit[1371]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc6f49b7e0 a2=0 a3=7ffc6f49b7cc items=0 ppid=1321 pid=1371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.131000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:44.142000 audit[1375]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.142000 audit[1375]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe9821ebf0 a2=0 a3=7ffe9821ebdc items=0 ppid=1321 pid=1375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.142000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:44.143000 audit[1376]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.143000 audit[1376]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff0f5bdae0 a2=0 a3=7fff0f5bdacc items=0 ppid=1321 pid=1376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.143000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:44.165986 kernel: Initializing XFRM netlink socket Feb 9 19:25:44.210060 env[1321]: time="2024-02-09T19:25:44.209972480Z" level=info msg="Default 
bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:25:44.251000 audit[1384]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.251000 audit[1384]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffef677990 a2=0 a3=7fffef67797c items=0 ppid=1321 pid=1384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.251000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 19:25:44.262000 audit[1387]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.262000 audit[1387]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffed95030d0 a2=0 a3=7ffed95030bc items=0 ppid=1321 pid=1387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.262000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 19:25:44.266000 audit[1390]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.266000 audit[1390]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe1a43ae30 a2=0 a3=7ffe1a43ae1c items=0 ppid=1321 pid=1390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.266000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 19:25:44.269000 audit[1392]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.269000 audit[1392]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffd2875210 a2=0 a3=7fffd28751fc items=0 ppid=1321 pid=1392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.269000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 19:25:44.273000 audit[1394]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.273000 audit[1394]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe5ebf4c90 a2=0 a3=7ffe5ebf4c7c items=0 ppid=1321 pid=1394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.273000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 19:25:44.275000 audit[1396]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.275000 audit[1396]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff4e0087f0 a2=0 
a3=7fff4e0087dc items=0 ppid=1321 pid=1396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.275000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 19:25:44.277000 audit[1398]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.277000 audit[1398]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd34526d90 a2=0 a3=7ffd34526d7c items=0 ppid=1321 pid=1398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.277000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 19:25:44.288000 audit[1401]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.288000 audit[1401]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffcddbbe950 a2=0 a3=7ffcddbbe93c items=0 ppid=1321 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.288000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 19:25:44.290000 audit[1403]: NETFILTER_CFG 
table=filter:21 family=2 entries=1 op=nft_register_rule pid=1403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.290000 audit[1403]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc759634d0 a2=0 a3=7ffc759634bc items=0 ppid=1321 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.290000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:25:44.292000 audit[1405]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.292000 audit[1405]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd7d8bbe70 a2=0 a3=7ffd7d8bbe5c items=0 ppid=1321 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.292000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:25:44.295000 audit[1407]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.295000 audit[1407]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb7c029a0 a2=0 a3=7fffb7c0298c items=0 ppid=1321 pid=1407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.295000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 19:25:44.297447 systemd-networkd[1018]: docker0: Link UP Feb 9 19:25:44.319000 audit[1411]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.319000 audit[1411]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffec0699b90 a2=0 a3=7ffec0699b7c items=0 ppid=1321 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.319000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:44.320000 audit[1412]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:44.320000 audit[1412]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffc1f81d10 a2=0 a3=7fffc1f81cfc items=0 ppid=1321 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:44.320000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:44.322241 env[1321]: time="2024-02-09T19:25:44.322194712Z" level=info msg="Loading containers: done." Feb 9 19:25:44.340488 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3758968172-merged.mount: Deactivated successfully. 
Feb 9 19:25:44.356503 env[1321]: time="2024-02-09T19:25:44.356435411Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:25:44.357198 env[1321]: time="2024-02-09T19:25:44.357153768Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:25:44.357568 env[1321]: time="2024-02-09T19:25:44.357530374Z" level=info msg="Daemon has completed initialization" Feb 9 19:25:44.403354 systemd[1]: Started docker.service. Feb 9 19:25:44.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:44.434484 env[1321]: time="2024-02-09T19:25:44.434310125Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:25:44.481644 systemd[1]: Reloading. Feb 9 19:25:44.572638 /usr/lib/systemd/system-generators/torcx-generator[1460]: time="2024-02-09T19:25:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:25:44.572669 /usr/lib/systemd/system-generators/torcx-generator[1460]: time="2024-02-09T19:25:44Z" level=info msg="torcx already run" Feb 9 19:25:44.660587 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:25:44.660605 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 19:25:44.683057 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:25:44.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:44.761797 systemd[1]: Started kubelet.service. Feb 9 19:25:44.880057 kubelet[1512]: E0209 19:25:44.879865 1512 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:25:44.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:25:44.882606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:25:44.882778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:25:45.932189 env[1135]: time="2024-02-09T19:25:45.932117162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:25:46.944224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477107555.mount: Deactivated successfully. 
Feb 9 19:25:49.862813 env[1135]: time="2024-02-09T19:25:49.862727470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:49.865824 env[1135]: time="2024-02-09T19:25:49.865768063Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:49.870997 env[1135]: time="2024-02-09T19:25:49.870951204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:49.875163 env[1135]: time="2024-02-09T19:25:49.875114302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:49.877465 env[1135]: time="2024-02-09T19:25:49.877397704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:25:49.890176 env[1135]: time="2024-02-09T19:25:49.890096290Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:25:52.920057 env[1135]: time="2024-02-09T19:25:52.919982271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:52.928564 env[1135]: time="2024-02-09T19:25:52.928513351Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:25:52.930816 env[1135]: time="2024-02-09T19:25:52.930771175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:52.935139 env[1135]: time="2024-02-09T19:25:52.935116665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:52.937180 env[1135]: time="2024-02-09T19:25:52.937156842Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:25:52.950462 env[1135]: time="2024-02-09T19:25:52.950433592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:25:55.103774 env[1135]: time="2024-02-09T19:25:55.103712867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:55.110415 env[1135]: time="2024-02-09T19:25:55.110367287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:55.116125 env[1135]: time="2024-02-09T19:25:55.116054263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:55.119051 env[1135]: time="2024-02-09T19:25:55.119012441Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:55.122237 env[1135]: time="2024-02-09T19:25:55.122160265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:25:55.133672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:25:55.137441 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 9 19:25:55.137856 kernel: audit: type=1130 audit(1707506755.133:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:55.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:55.138672 env[1135]: time="2024-02-09T19:25:55.135864688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:25:55.133908 systemd[1]: Stopped kubelet.service. Feb 9 19:25:55.135437 systemd[1]: Started kubelet.service. Feb 9 19:25:55.149962 kernel: audit: type=1131 audit(1707506755.133:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:55.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:55.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:55.157939 kernel: audit: type=1130 audit(1707506755.134:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:55.222487 kubelet[1546]: E0209 19:25:55.222441 1546 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:25:55.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:25:55.225994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:25:55.226144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:25:55.230937 kernel: audit: type=1131 audit(1707506755.225:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:25:57.263331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570090978.mount: Deactivated successfully. 
Feb 9 19:25:57.939435 env[1135]: time="2024-02-09T19:25:57.939308526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:57.941792 env[1135]: time="2024-02-09T19:25:57.941724307Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:57.943772 env[1135]: time="2024-02-09T19:25:57.943714830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:57.946030 env[1135]: time="2024-02-09T19:25:57.945972725Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:57.946591 env[1135]: time="2024-02-09T19:25:57.946555197Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:25:57.957759 env[1135]: time="2024-02-09T19:25:57.957565797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:25:58.559781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163881444.mount: Deactivated successfully. 
Feb 9 19:25:58.571355 env[1135]: time="2024-02-09T19:25:58.571270460Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:58.574442 env[1135]: time="2024-02-09T19:25:58.574388348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:58.577501 env[1135]: time="2024-02-09T19:25:58.577452004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:58.580755 env[1135]: time="2024-02-09T19:25:58.580701839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:58.582419 env[1135]: time="2024-02-09T19:25:58.582362784Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:25:58.606885 env[1135]: time="2024-02-09T19:25:58.606822438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:25:59.664447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482845251.mount: Deactivated successfully. Feb 9 19:26:05.429167 kernel: audit: type=1130 audit(1707506765.417:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:05.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:26:05.418194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:26:05.441047 kernel: audit: type=1131 audit(1707506765.417:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:05.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:05.418452 systemd[1]: Stopped kubelet.service. Feb 9 19:26:05.420277 systemd[1]: Started kubelet.service. Feb 9 19:26:05.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:05.452934 kernel: audit: type=1130 audit(1707506765.419:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:05.551371 kubelet[1566]: E0209 19:26:05.551317 1566 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:26:05.552936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:26:05.553079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:26:05.557177 kernel: audit: type=1131 audit(1707506765.552:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Feb 9 19:26:05.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:26:06.516871 env[1135]: time="2024-02-09T19:26:06.516769029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:06.521219 env[1135]: time="2024-02-09T19:26:06.521150254Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:06.526988 env[1135]: time="2024-02-09T19:26:06.526362862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:06.530831 env[1135]: time="2024-02-09T19:26:06.530746231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:06.533787 env[1135]: time="2024-02-09T19:26:06.533686668Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:26:06.561833 env[1135]: time="2024-02-09T19:26:06.561709642Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:26:07.255020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947722210.mount: Deactivated successfully. 
Feb 9 19:26:09.115021 env[1135]: time="2024-02-09T19:26:09.114551049Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:09.117693 env[1135]: time="2024-02-09T19:26:09.117624258Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:09.120343 env[1135]: time="2024-02-09T19:26:09.120287282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:09.122834 env[1135]: time="2024-02-09T19:26:09.122781256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:09.123689 env[1135]: time="2024-02-09T19:26:09.123613779Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:26:11.422372 update_engine[1123]: I0209 19:26:11.421972 1123 update_attempter.cc:509] Updating boot flags... Feb 9 19:26:14.369379 systemd[1]: Stopped kubelet.service. Feb 9 19:26:14.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:14.380942 kernel: audit: type=1130 audit(1707506774.370:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:14.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:14.390254 kernel: audit: type=1131 audit(1707506774.373:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:14.403843 systemd[1]: Reloading. Feb 9 19:26:14.493163 /usr/lib/systemd/system-generators/torcx-generator[1675]: time="2024-02-09T19:26:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:26:14.493193 /usr/lib/systemd/system-generators/torcx-generator[1675]: time="2024-02-09T19:26:14Z" level=info msg="torcx already run" Feb 9 19:26:14.578064 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:26:14.578084 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:26:14.600166 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:26:14.691317 systemd[1]: Started kubelet.service. Feb 9 19:26:14.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:14.702145 kernel: audit: type=1130 audit(1707506774.690:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:14.786984 kubelet[1728]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:26:14.786984 kubelet[1728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:26:14.787334 kubelet[1728]: I0209 19:26:14.787102 1728 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:26:14.790711 kubelet[1728]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:26:14.790711 kubelet[1728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:26:15.191286 kubelet[1728]: I0209 19:26:15.191235 1728 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:26:15.191286 kubelet[1728]: I0209 19:26:15.191260 1728 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:26:15.191633 kubelet[1728]: I0209 19:26:15.191466 1728 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:26:15.196801 kubelet[1728]: I0209 19:26:15.196758 1728 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:26:15.197092 kubelet[1728]: I0209 19:26:15.197055 1728 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:26:15.197197 kubelet[1728]: I0209 19:26:15.197141 1728 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:26:15.197197 kubelet[1728]: I0209 19:26:15.197160 1728 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:26:15.197197 kubelet[1728]: I0209 19:26:15.197172 1728 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:26:15.197519 kubelet[1728]: I0209 19:26:15.197253 1728 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 19:26:15.198272 kubelet[1728]: I0209 19:26:15.198232 1728 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:26:15.198753 kubelet[1728]: E0209 19:26:15.198642 1728 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.217:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.206337 kubelet[1728]: I0209 19:26:15.206302 1728 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:26:15.206581 kubelet[1728]: I0209 19:26:15.206556 1728 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:26:15.206779 kubelet[1728]: W0209 19:26:15.206720 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-b-76a749f546.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.206779 kubelet[1728]: E0209 19:26:15.206774 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-b-76a749f546.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.206779 kubelet[1728]: I0209 19:26:15.206737 1728 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:26:15.207078 kubelet[1728]: I0209 19:26:15.206800 1728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:26:15.207480 kubelet[1728]: W0209 19:26:15.207433 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get 
"https://172.24.4.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.207480 kubelet[1728]: E0209 19:26:15.207469 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.207655 kubelet[1728]: I0209 19:26:15.207535 1728 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:26:15.207790 kubelet[1728]: W0209 19:26:15.207755 1728 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:26:15.208191 kubelet[1728]: I0209 19:26:15.208160 1728 server.go:1186] "Started kubelet" Feb 9 19:26:15.208000 audit[1728]: AVC avc: denied { mac_admin } for pid=1728 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:15.208000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:15.214839 kernel: audit: type=1400 audit(1707506775.208:190): avc: denied { mac_admin } for pid=1728 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:15.215038 kernel: audit: type=1401 audit(1707506775.208:190): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:15.215104 kubelet[1728]: I0209 19:26:15.214867 1728 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:26:15.215104 kubelet[1728]: I0209 
19:26:15.214941 1728 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:26:15.215104 kubelet[1728]: I0209 19:26:15.215017 1728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:26:15.208000 audit[1728]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e391d0 a1=c000b67740 a2=c000e391a0 a3=25 items=0 ppid=1 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.219687 kubelet[1728]: I0209 19:26:15.219654 1728 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:26:15.208000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:15.224160 kubelet[1728]: I0209 19:26:15.224092 1728 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:26:15.225171 kernel: audit: type=1300 audit(1707506775.208:190): arch=c000003e syscall=188 success=no exit=-22 a0=c000e391d0 a1=c000b67740 a2=c000e391a0 a3=25 items=0 ppid=1 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.225292 kernel: audit: type=1327 audit(1707506775.208:190): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:15.225350 kernel: audit: 
type=1400 audit(1707506775.214:191): avc: denied { mac_admin } for pid=1728 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:15.214000 audit[1728]: AVC avc: denied { mac_admin } for pid=1728 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:15.228706 kernel: audit: type=1401 audit(1707506775.214:191): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:15.214000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:15.230773 kubelet[1728]: I0209 19:26:15.230734 1728 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:26:15.231220 kubelet[1728]: I0209 19:26:15.231170 1728 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:26:15.232461 kubelet[1728]: E0209 19:26:15.231629 1728 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:26:15.232461 kubelet[1728]: E0209 19:26:15.231656 1728 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:26:15.233000 kubelet[1728]: W0209 19:26:15.232953 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.233000 kubelet[1728]: E0209 19:26:15.233000 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.233236 kubelet[1728]: E0209 19:26:15.233054 1728 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.24.4.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-b-76a749f546.novalocal?timeout=10s": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.214000 audit[1728]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0003ff500 a1=c000b67758 a2=c000e39260 a3=25 items=0 ppid=1 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.214000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:15.239944 kernel: audit: type=1300 audit(1707506775.214:191): arch=c000003e syscall=188 success=no exit=-22 a0=c0003ff500 a1=c000b67758 a2=c000e39260 a3=25 items=0 ppid=1 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.240000 audit[1740]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.240000 audit[1740]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffecec6dfb0 a2=0 a3=7ffecec6df9c items=0 ppid=1728 pid=1740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:26:15.242369 kubelet[1728]: E0209 19:26:15.242248 1728 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d83da4aac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 208118956, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 208118956, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.24.4.217:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.217:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:26:15.243000 audit[1741]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.243000 audit[1741]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff03eb4bb0 a2=0 a3=7fff03eb4b9c items=0 ppid=1728 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:26:15.249000 audit[1743]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.249000 audit[1743]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeeaabe000 a2=0 a3=7ffeeaabdfec items=0 ppid=1728 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:26:15.257000 audit[1745]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1745 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.257000 audit[1745]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe960f6d60 a2=0 a3=7ffe960f6d4c items=0 ppid=1728 pid=1745 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:26:15.275000 audit[1751]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1751 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.275000 audit[1751]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe46b42d80 a2=0 a3=7ffe46b42d6c items=0 ppid=1728 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:26:15.276000 audit[1752]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.276000 audit[1752]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc1e18fd80 a2=0 a3=7ffc1e18fd6c items=0 ppid=1728 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:26:15.284000 audit[1755]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1755 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Feb 9 19:26:15.284000 audit[1755]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd98d93e30 a2=0 a3=7ffd98d93e1c items=0 ppid=1728 pid=1755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:26:15.293000 audit[1758]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.293000 audit[1758]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd9af33f20 a2=0 a3=7ffd9af33f0c items=0 ppid=1728 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:26:15.296000 audit[1759]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.296000 audit[1759]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe26766c10 a2=0 a3=7ffe26766bfc items=0 ppid=1728 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.296000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:26:15.304000 audit[1760]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.304000 audit[1760]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc20075630 a2=0 a3=7ffc2007561c items=0 ppid=1728 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:26:15.312000 audit[1762]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.312000 audit[1762]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcc6c8d560 a2=0 a3=7ffcc6c8d54c items=0 ppid=1728 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:26:15.314000 audit[1764]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.314000 audit[1764]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdcc5a94c0 a2=0 a3=7ffdcc5a94ac items=0 ppid=1728 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 19:26:15.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:26:15.316000 audit[1766]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.316000 audit[1766]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffee205f930 a2=0 a3=7ffee205f91c items=0 ppid=1728 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:26:15.318000 audit[1768]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.318000 audit[1768]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffe494de1c0 a2=0 a3=7ffe494de1ac items=0 ppid=1728 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:26:15.320000 audit[1770]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.320000 audit[1770]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=540 a0=3 a1=7fffe7e89340 a2=0 a3=7fffe7e8932c items=0 ppid=1728 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:26:15.321775 kubelet[1728]: I0209 19:26:15.321760 1728 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:26:15.321000 audit[1771]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.321000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcee7424f0 a2=0 a3=7ffcee7424dc items=0 ppid=1728 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.321000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:26:15.322000 audit[1772]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.322000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2fba0290 a2=0 a3=7ffc2fba027c items=0 ppid=1728 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.322000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:26:15.323000 audit[1773]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.323000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe05ade980 a2=0 a3=7ffe05ade96c items=0 ppid=1728 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:26:15.325037 kubelet[1728]: I0209 19:26:15.324966 1728 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:26:15.325037 kubelet[1728]: I0209 19:26:15.324979 1728 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:26:15.325037 kubelet[1728]: I0209 19:26:15.324992 1728 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:26:15.325000 audit[1774]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:15.325000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcec123a10 a2=0 a3=7ffcec1239fc items=0 ppid=1728 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:26:15.327000 audit[1776]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 
19:26:15.327000 audit[1776]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd401cc8d0 a2=0 a3=7ffd401cc8bc items=0 ppid=1728 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.327000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:26:15.328000 audit[1777]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.328000 audit[1777]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffe8bf4740 a2=0 a3=7fffe8bf472c items=0 ppid=1728 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.328000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:26:15.329847 kubelet[1728]: I0209 19:26:15.329819 1728 policy_none.go:49] "None policy: Start" Feb 9 19:26:15.330410 kubelet[1728]: I0209 19:26:15.330395 1728 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:26:15.330462 kubelet[1728]: I0209 19:26:15.330420 1728 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:26:15.330000 audit[1778]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.330000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffee4e4e8a0 a2=0 a3=7ffee4e4e88c items=0 ppid=1728 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:26:15.337030 kubelet[1728]: I0209 19:26:15.337004 1728 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:26:15.336000 audit[1728]: AVC avc: denied { mac_admin } for pid=1728 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:15.336000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:15.337000 audit[1780]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.337000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe2e045b60 a2=0 a3=7ffe2e045b4c items=0 ppid=1728 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:26:15.336000 audit[1728]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0002f1650 a1=c000f79f98 a2=c0002f15f0 a3=25 items=0 ppid=1 pid=1728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.336000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:15.338987 kubelet[1728]: I0209 19:26:15.338868 1728 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:26:15.339062 kubelet[1728]: I0209 19:26:15.339043 1728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:26:15.338000 audit[1781]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.338000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffccf7623a0 a2=0 a3=7ffccf76238c items=0 ppid=1728 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.338000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:26:15.341000 audit[1782]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.341000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6ac85bf0 a2=0 a3=7fff6ac85bdc items=0 ppid=1728 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.341000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:26:15.343000 audit[1784]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.343000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffca69dfbf0 a2=0 a3=7ffca69dfbdc items=0 ppid=1728 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.343000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:26:15.345000 audit[1786]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.345000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff81653c80 a2=0 a3=7fff81653c6c items=0 ppid=1728 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:26:15.347000 audit[1788]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.347000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffe8791b640 a2=0 a3=7ffe8791b62c items=0 ppid=1728 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:26:15.349000 audit[1790]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.349000 audit[1790]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff2787dd20 a2=0 a3=7fff2787dd0c items=0 ppid=1728 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:26:15.352000 audit[1792]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.352000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7fff45e3f9b0 a2=0 a3=7fff45e3f99c items=0 ppid=1728 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:26:15.354097 kubelet[1728]: I0209 
19:26:15.354085 1728 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:26:15.354170 kubelet[1728]: I0209 19:26:15.354161 1728 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:26:15.354235 kubelet[1728]: I0209 19:26:15.354226 1728 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:26:15.354322 kubelet[1728]: E0209 19:26:15.354313 1728 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:26:15.354000 audit[1793]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.354000 audit[1793]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4b4f4d20 a2=0 a3=7fff4b4f4d0c items=0 ppid=1728 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.354000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:26:15.356048 kubelet[1728]: W0209 19:26:15.356015 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.356159 kubelet[1728]: E0209 19:26:15.356149 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.356357 kubelet[1728]: E0209 19:26:15.356344 1728 eviction_manager.go:261] "Eviction manager: failed to get summary stats" 
err="failed to get node info: node \"ci-3510-3-2-b-76a749f546.novalocal\" not found" Feb 9 19:26:15.356706 kubelet[1728]: I0209 19:26:15.356688 1728 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.357073 kubelet[1728]: E0209 19:26:15.357058 1728 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.217:6443/api/v1/nodes\": dial tcp 172.24.4.217:6443: connect: connection refused" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.356000 audit[1794]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.356000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda547f360 a2=0 a3=7ffda547f34c items=0 ppid=1728 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:26:15.357000 audit[1795]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:15.357000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2e2b1360 a2=0 a3=7ffd2e2b134c items=0 ppid=1728 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:15.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:26:15.433757 kubelet[1728]: E0209 19:26:15.433705 1728 controller.go:146] failed to ensure 
lease exists, will retry in 400ms, error: Get "https://172.24.4.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-b-76a749f546.novalocal?timeout=10s": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.455201 kubelet[1728]: I0209 19:26:15.455000 1728 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:15.466357 kubelet[1728]: I0209 19:26:15.466314 1728 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:15.469221 kubelet[1728]: I0209 19:26:15.469181 1728 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:15.470518 kubelet[1728]: I0209 19:26:15.470478 1728 status_manager.go:698] "Failed to get status for pod" podUID=540a4d9839dd742a7133bf7cb31353af pod="kube-system/kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal" err="Get \"https://172.24.4.217:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal\": dial tcp 172.24.4.217:6443: connect: connection refused" Feb 9 19:26:15.478075 kubelet[1728]: I0209 19:26:15.478026 1728 status_manager.go:698] "Failed to get status for pod" podUID=aef650e3899f4d42da56ac5fc9eb41cc pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" err="Get \"https://172.24.4.217:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\": dial tcp 172.24.4.217:6443: connect: connection refused" Feb 9 19:26:15.478579 kubelet[1728]: I0209 19:26:15.478523 1728 status_manager.go:698] "Failed to get status for pod" podUID=229e589d9a73b3f91e52ba60ae6ecba1 pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" err="Get \"https://172.24.4.217:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\": dial tcp 172.24.4.217:6443: connect: connection refused" Feb 9 19:26:15.559822 kubelet[1728]: I0209 19:26:15.559754 1728 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 
19:26:15.560956 kubelet[1728]: E0209 19:26:15.560872 1728 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.217:6443/api/v1/nodes\": dial tcp 172.24.4.217:6443: connect: connection refused" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.632354 kubelet[1728]: I0209 19:26:15.632301 1728 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/540a4d9839dd742a7133bf7cb31353af-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"540a4d9839dd742a7133bf7cb31353af\") " pod="kube-system/kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.632711 kubelet[1728]: I0209 19:26:15.632683 1728 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aef650e3899f4d42da56ac5fc9eb41cc-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"aef650e3899f4d42da56ac5fc9eb41cc\") " pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.633027 kubelet[1728]: I0209 19:26:15.632999 1728 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.633278 kubelet[1728]: I0209 19:26:15.633253 1728 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.633503 kubelet[1728]: I0209 19:26:15.633479 1728 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.633728 kubelet[1728]: I0209 19:26:15.633704 1728 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aef650e3899f4d42da56ac5fc9eb41cc-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"aef650e3899f4d42da56ac5fc9eb41cc\") " pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.634020 kubelet[1728]: I0209 19:26:15.633992 1728 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aef650e3899f4d42da56ac5fc9eb41cc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"aef650e3899f4d42da56ac5fc9eb41cc\") " pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.634246 kubelet[1728]: I0209 19:26:15.634223 1728 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.634492 kubelet[1728]: I0209 19:26:15.634467 1728 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.786187 env[1135]: time="2024-02-09T19:26:15.785953577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal,Uid:540a4d9839dd742a7133bf7cb31353af,Namespace:kube-system,Attempt:0,}" Feb 9 19:26:15.791735 env[1135]: time="2024-02-09T19:26:15.791670085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal,Uid:229e589d9a73b3f91e52ba60ae6ecba1,Namespace:kube-system,Attempt:0,}" Feb 9 19:26:15.793784 env[1135]: time="2024-02-09T19:26:15.793163450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal,Uid:aef650e3899f4d42da56ac5fc9eb41cc,Namespace:kube-system,Attempt:0,}" Feb 9 19:26:15.835165 kubelet[1728]: E0209 19:26:15.835052 1728 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.24.4.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-b-76a749f546.novalocal?timeout=10s": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:15.963695 kubelet[1728]: I0209 19:26:15.963638 1728 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:15.964302 kubelet[1728]: E0209 19:26:15.964269 1728 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.217:6443/api/v1/nodes\": dial tcp 172.24.4.217:6443: connect: connection refused" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:16.302462 kubelet[1728]: W0209 
19:26:16.302326 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-b-76a749f546.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.302462 kubelet[1728]: E0209 19:26:16.302441 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-b-76a749f546.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.307257 kubelet[1728]: W0209 19:26:16.307177 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.307257 kubelet[1728]: E0209 19:26:16.307248 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.390091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3730019696.mount: Deactivated successfully. 
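The `audit: PROCTITLE` records interleaved through the log above carry the process command line as a hex-encoded string in which the original NUL separators between argv elements are preserved. A minimal sketch to decode one of those payloads back into a readable command (the helper name is ours; the example payload is copied from one of the iptables records above):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex payload into a readable command line.

    The kernel records argv as a single buffer with NUL bytes between
    arguments; we replace those with spaces for display.
    """
    raw = bytes.fromhex(hex_str)
    return raw.decode("utf-8", errors="replace").replace("\x00", " ")

# Payload taken verbatim from one of the NETFILTER_CFG audit records above:
payload = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572"
)
print(decode_proctitle(payload))
# -> iptables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t filter
```

Decoding the other PROCTITLE payloads the same way shows the kubelet creating its standard chains (KUBE-MARK-DROP, KUBE-FIREWALL, KUBE-MARK-MASQ, KUBE-POSTROUTING) for both IPv4 and IPv6 via `xtables-nft-multi`.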
Feb 9 19:26:16.401783 env[1135]: time="2024-02-09T19:26:16.401701329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.404632 env[1135]: time="2024-02-09T19:26:16.404546121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.409034 env[1135]: time="2024-02-09T19:26:16.408977924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.411697 env[1135]: time="2024-02-09T19:26:16.411641615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.415796 env[1135]: time="2024-02-09T19:26:16.415741312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.419022 kubelet[1728]: W0209 19:26:16.418876 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.419152 kubelet[1728]: E0209 19:26:16.419039 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.422860 env[1135]: time="2024-02-09T19:26:16.422787473Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.424482 kubelet[1728]: W0209 19:26:16.424394 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.424482 kubelet[1728]: E0209 19:26:16.424489 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.434628 env[1135]: time="2024-02-09T19:26:16.434574309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.446858 env[1135]: time="2024-02-09T19:26:16.446800262Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.449748 env[1135]: time="2024-02-09T19:26:16.449696581Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.458761 env[1135]: time="2024-02-09T19:26:16.458721732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.463095 env[1135]: time="2024-02-09T19:26:16.463070990Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.463849 env[1135]: time="2024-02-09T19:26:16.463829189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:16.478790 env[1135]: time="2024-02-09T19:26:16.478523114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:16.478790 env[1135]: time="2024-02-09T19:26:16.478589449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:16.478790 env[1135]: time="2024-02-09T19:26:16.478652417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:16.482789 env[1135]: time="2024-02-09T19:26:16.479376963Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83b24451db336e3e9572cacf86023f967431a6c6c125428d0f08c710a53667cf pid=1804 runtime=io.containerd.runc.v2 Feb 9 19:26:16.527775 env[1135]: time="2024-02-09T19:26:16.527677451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:16.527775 env[1135]: time="2024-02-09T19:26:16.527741581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:16.528107 env[1135]: time="2024-02-09T19:26:16.527756950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:16.528818 env[1135]: time="2024-02-09T19:26:16.528770942Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/743cc303b58d2e1ced82d0d98cad942d2475f57a7c902b1c4c531131431f2193 pid=1837 runtime=io.containerd.runc.v2 Feb 9 19:26:16.533070 env[1135]: time="2024-02-09T19:26:16.532996085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:16.533136 env[1135]: time="2024-02-09T19:26:16.533087568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:16.533136 env[1135]: time="2024-02-09T19:26:16.533117083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:16.533296 env[1135]: time="2024-02-09T19:26:16.533260183Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87cd1a69e655c3b6767de76fda3dcfea7a78a8a25d559a4eb01cb74d2b37de16 pid=1855 runtime=io.containerd.runc.v2 Feb 9 19:26:16.606621 env[1135]: time="2024-02-09T19:26:16.606473437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal,Uid:229e589d9a73b3f91e52ba60ae6ecba1,Namespace:kube-system,Attempt:0,} returns sandbox id \"83b24451db336e3e9572cacf86023f967431a6c6c125428d0f08c710a53667cf\"" Feb 9 19:26:16.610880 env[1135]: time="2024-02-09T19:26:16.610831251Z" level=info msg="CreateContainer within sandbox \"83b24451db336e3e9572cacf86023f967431a6c6c125428d0f08c710a53667cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:26:16.615563 env[1135]: time="2024-02-09T19:26:16.615492015Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal,Uid:aef650e3899f4d42da56ac5fc9eb41cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"743cc303b58d2e1ced82d0d98cad942d2475f57a7c902b1c4c531131431f2193\"" Feb 9 19:26:16.618190 env[1135]: time="2024-02-09T19:26:16.618156727Z" level=info msg="CreateContainer within sandbox \"743cc303b58d2e1ced82d0d98cad942d2475f57a7c902b1c4c531131431f2193\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:26:16.634064 env[1135]: time="2024-02-09T19:26:16.634018754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal,Uid:540a4d9839dd742a7133bf7cb31353af,Namespace:kube-system,Attempt:0,} returns sandbox id \"87cd1a69e655c3b6767de76fda3dcfea7a78a8a25d559a4eb01cb74d2b37de16\"" Feb 9 19:26:16.635561 kubelet[1728]: E0209 19:26:16.635533 1728 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.24.4.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-b-76a749f546.novalocal?timeout=10s": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:16.637429 env[1135]: time="2024-02-09T19:26:16.637399747Z" level=info msg="CreateContainer within sandbox \"87cd1a69e655c3b6767de76fda3dcfea7a78a8a25d559a4eb01cb74d2b37de16\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:26:16.657992 env[1135]: time="2024-02-09T19:26:16.657950332Z" level=info msg="CreateContainer within sandbox \"743cc303b58d2e1ced82d0d98cad942d2475f57a7c902b1c4c531131431f2193\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50b85e90b3ffe3f839615c85b71fba2c9951af6be7fff30ba45c6ab144a1858d\"" Feb 9 19:26:16.661942 env[1135]: time="2024-02-09T19:26:16.661849701Z" level=info msg="CreateContainer within sandbox \"83b24451db336e3e9572cacf86023f967431a6c6c125428d0f08c710a53667cf\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"577b1c16b01d54412637fe95f05c71d5fbd531827c1495fde9613341c5d7f618\"" Feb 9 19:26:16.662424 env[1135]: time="2024-02-09T19:26:16.662394558Z" level=info msg="StartContainer for \"577b1c16b01d54412637fe95f05c71d5fbd531827c1495fde9613341c5d7f618\"" Feb 9 19:26:16.669725 env[1135]: time="2024-02-09T19:26:16.669681402Z" level=info msg="StartContainer for \"50b85e90b3ffe3f839615c85b71fba2c9951af6be7fff30ba45c6ab144a1858d\"" Feb 9 19:26:16.672255 env[1135]: time="2024-02-09T19:26:16.672217462Z" level=info msg="CreateContainer within sandbox \"87cd1a69e655c3b6767de76fda3dcfea7a78a8a25d559a4eb01cb74d2b37de16\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"edf4cd373227ca3cb0b0418a1e078c2df6324b6dd6006db876743caa022c08e0\"" Feb 9 19:26:16.672766 env[1135]: time="2024-02-09T19:26:16.672745007Z" level=info msg="StartContainer for \"edf4cd373227ca3cb0b0418a1e078c2df6324b6dd6006db876743caa022c08e0\"" Feb 9 19:26:16.766816 kubelet[1728]: I0209 19:26:16.766374 1728 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:16.766816 kubelet[1728]: E0209 19:26:16.766736 1728 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.217:6443/api/v1/nodes\": dial tcp 172.24.4.217:6443: connect: connection refused" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:16.798695 env[1135]: time="2024-02-09T19:26:16.798645265Z" level=info msg="StartContainer for \"577b1c16b01d54412637fe95f05c71d5fbd531827c1495fde9613341c5d7f618\" returns successfully" Feb 9 19:26:16.821826 env[1135]: time="2024-02-09T19:26:16.821774218Z" level=info msg="StartContainer for \"50b85e90b3ffe3f839615c85b71fba2c9951af6be7fff30ba45c6ab144a1858d\" returns successfully" Feb 9 19:26:16.832317 env[1135]: time="2024-02-09T19:26:16.832200531Z" level=info msg="StartContainer for 
\"edf4cd373227ca3cb0b0418a1e078c2df6324b6dd6006db876743caa022c08e0\" returns successfully" Feb 9 19:26:17.247350 kubelet[1728]: E0209 19:26:17.247323 1728 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.217:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:17.368442 kubelet[1728]: I0209 19:26:17.368402 1728 status_manager.go:698] "Failed to get status for pod" podUID=540a4d9839dd742a7133bf7cb31353af pod="kube-system/kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal" err="Get \"https://172.24.4.217:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal\": dial tcp 172.24.4.217:6443: connect: connection refused" Feb 9 19:26:17.375075 kubelet[1728]: I0209 19:26:17.370655 1728 status_manager.go:698] "Failed to get status for pod" podUID=aef650e3899f4d42da56ac5fc9eb41cc pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" err="Get \"https://172.24.4.217:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\": dial tcp 172.24.4.217:6443: connect: connection refused" Feb 9 19:26:17.407396 kubelet[1728]: I0209 19:26:17.407365 1728 status_manager.go:698] "Failed to get status for pod" podUID=229e589d9a73b3f91e52ba60ae6ecba1 pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" err="Get \"https://172.24.4.217:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\": dial tcp 172.24.4.217:6443: connect: connection refused" Feb 9 19:26:18.034679 kubelet[1728]: W0209 19:26:18.034610 1728 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: 
connect: connection refused Feb 9 19:26:18.034679 kubelet[1728]: E0209 19:26:18.034673 1728 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.217:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Feb 9 19:26:18.369598 kubelet[1728]: I0209 19:26:18.369479 1728 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:20.835136 kubelet[1728]: I0209 19:26:20.835100 1728 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:21.210294 kubelet[1728]: I0209 19:26:21.210237 1728 apiserver.go:52] "Watching apiserver" Feb 9 19:26:21.231685 kubelet[1728]: I0209 19:26:21.231611 1728 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:26:21.267008 kubelet[1728]: I0209 19:26:21.266971 1728 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:26:21.409518 kubelet[1728]: E0209 19:26:21.409296 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d83da4aac", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", 
Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 208118956, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 208118956, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:26:21.468252 kubelet[1728]: E0209 19:26:21.467978 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d85414d83", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 231647107, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 231647107, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:26:21.525811 kubelet[1728]: E0209 19:26:21.525651 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d8ac89738", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-b-76a749f546.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324399416, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324399416, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:26:21.582672 kubelet[1728]: E0209 19:26:21.582498 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d8ac8aa3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-b-76a749f546.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324404285, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324404285, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:26:21.639856 kubelet[1728]: E0209 19:26:21.639573 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d8ac8c0b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-b-76a749f546.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324410036, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324410036, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:26:21.700104 kubelet[1728]: E0209 19:26:21.699756 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d8bb47d05", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 339859205, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 339859205, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:26:21.770535 kubelet[1728]: E0209 19:26:21.770159 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d8ac89738", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-b-76a749f546.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324399416, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 356608935, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:26:21.833097 kubelet[1728]: E0209 19:26:21.832821 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d8ac8aa3d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-b-76a749f546.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324404285, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 356614806, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:26:21.899857 kubelet[1728]: E0209 19:26:21.899637 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d8ac8c0b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-b-76a749f546.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324410036, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 356620076, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:26:22.212644 kubelet[1728]: E0209 19:26:22.212477 1728 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-b-76a749f546.novalocal.17b2485d8ac89738", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-b-76a749f546.novalocal", UID:"ci-3510-3-2-b-76a749f546.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-b-76a749f546.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-b-76a749f546.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 324399416, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 26, 15, 466207285, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:26:23.935186 systemd[1]: Reloading. 
Feb 9 19:26:24.048324 /usr/lib/systemd/system-generators/torcx-generator[2056]: time="2024-02-09T19:26:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:26:24.048360 /usr/lib/systemd/system-generators/torcx-generator[2056]: time="2024-02-09T19:26:24Z" level=info msg="torcx already run" Feb 9 19:26:24.132686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:26:24.132706 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:26:24.155389 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:26:24.260435 kubelet[1728]: I0209 19:26:24.259398 1728 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:26:24.259918 systemd[1]: Stopping kubelet.service... Feb 9 19:26:24.279326 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:26:24.279722 systemd[1]: Stopped kubelet.service. Feb 9 19:26:24.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:24.282317 systemd[1]: Started kubelet.service. 
Feb 9 19:26:24.282454 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 19:26:24.282513 kernel: audit: type=1131 audit(1707506784.278:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:24.299512 kernel: audit: type=1130 audit(1707506784.282:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:24.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:24.418187 kubelet[2109]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:26:24.418187 kubelet[2109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:26:24.418605 kubelet[2109]: I0209 19:26:24.418255 2109 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:26:24.419816 kubelet[2109]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:26:24.419816 kubelet[2109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:26:24.423269 kubelet[2109]: I0209 19:26:24.423245 2109 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:26:24.423377 kubelet[2109]: I0209 19:26:24.423360 2109 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:26:24.423762 kubelet[2109]: I0209 19:26:24.423748 2109 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:26:24.425813 kubelet[2109]: I0209 19:26:24.425798 2109 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:26:24.428536 kubelet[2109]: I0209 19:26:24.428513 2109 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:26:24.432367 kubelet[2109]: I0209 19:26:24.432340 2109 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:26:24.432853 kubelet[2109]: I0209 19:26:24.432836 2109 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:26:24.432969 kubelet[2109]: I0209 19:26:24.432952 2109 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan 
Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:26:24.433067 kubelet[2109]: I0209 19:26:24.432982 2109 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:26:24.433067 kubelet[2109]: I0209 19:26:24.433001 2109 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:26:24.433067 kubelet[2109]: I0209 19:26:24.433033 2109 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:26:24.436706 kubelet[2109]: I0209 19:26:24.436692 2109 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:26:24.437247 kubelet[2109]: I0209 19:26:24.437235 2109 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:26:24.437366 kubelet[2109]: I0209 19:26:24.437354 2109 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:26:24.437441 kubelet[2109]: I0209 19:26:24.437431 2109 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:26:24.445207 kubelet[2109]: I0209 19:26:24.445189 2109 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:26:24.446112 kubelet[2109]: I0209 19:26:24.446100 2109 server.go:1186] "Started kubelet" Feb 9 19:26:24.454731 kernel: audit: type=1400 audit(1707506784.447:228): avc: denied { mac_admin } for pid=2109 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:24.454930 kernel: audit: type=1401 
audit(1707506784.447:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:24.447000 audit[2109]: AVC avc: denied { mac_admin } for pid=2109 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:24.447000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:24.447000 audit[2109]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000aec480 a1=c000aa4600 a2=c000aec450 a3=25 items=0 ppid=1 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:24.461213 kubelet[2109]: I0209 19:26:24.461197 2109 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:26:24.461917 kernel: audit: type=1300 audit(1707506784.447:228): arch=c000003e syscall=188 success=no exit=-22 a0=c000aec480 a1=c000aa4600 a2=c000aec450 a3=25 items=0 ppid=1 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:24.462088 kubelet[2109]: I0209 19:26:24.462074 2109 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:26:24.472841 kernel: audit: type=1327 audit(1707506784.447:228): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:24.447000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:24.473315 kubelet[2109]: E0209 19:26:24.468130 2109 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:26:24.473315 kubelet[2109]: E0209 19:26:24.468177 2109 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:26:24.474637 kubelet[2109]: I0209 19:26:24.474130 2109 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:26:24.474637 kubelet[2109]: I0209 19:26:24.474200 2109 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:26:24.474637 kubelet[2109]: I0209 19:26:24.474224 2109 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:26:24.473000 audit[2109]: AVC avc: denied { mac_admin } for pid=2109 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:24.480483 kernel: audit: type=1400 audit(1707506784.473:229): avc: denied { mac_admin } for pid=2109 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:24.480528 kernel: audit: type=1401 audit(1707506784.473:229): op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:24.473000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:24.485847 kernel: audit: type=1300 audit(1707506784.473:229): arch=c000003e syscall=188 success=no exit=-22 a0=c000bf7360 a1=c000ad7338 a2=c000e911a0 a3=25 items=0 ppid=1 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:24.473000 audit[2109]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bf7360 a1=c000ad7338 a2=c000e911a0 a3=25 items=0 ppid=1 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:24.485990 kubelet[2109]: I0209 19:26:24.485101 2109 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:26:24.485990 kubelet[2109]: I0209 19:26:24.485199 2109 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:26:24.473000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:24.492499 kernel: audit: type=1327 audit(1707506784.473:229): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:24.535372 kubelet[2109]: I0209 19:26:24.533656 2109 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:26:24.556472 kubelet[2109]: I0209 19:26:24.556445 2109 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:26:24.556472 kubelet[2109]: I0209 19:26:24.556469 2109 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:26:24.556694 kubelet[2109]: I0209 19:26:24.556486 2109 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:26:24.556694 kubelet[2109]: E0209 19:26:24.556531 2109 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:26:24.572087 kubelet[2109]: I0209 19:26:24.571847 2109 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:26:24.572087 kubelet[2109]: I0209 19:26:24.571867 2109 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:26:24.572087 kubelet[2109]: I0209 19:26:24.571881 2109 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:26:24.573587 kubelet[2109]: I0209 19:26:24.573569 2109 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:26:24.573672 kubelet[2109]: I0209 19:26:24.573592 2109 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:26:24.573672 kubelet[2109]: I0209 19:26:24.573600 2109 policy_none.go:49] "None policy: Start" Feb 9 19:26:24.574363 kubelet[2109]: I0209 19:26:24.574322 2109 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:26:24.574363 kubelet[2109]: I0209 19:26:24.574344 2109 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:26:24.574769 kubelet[2109]: I0209 19:26:24.574752 2109 state_mem.go:75] "Updated machine memory state" Feb 9 19:26:24.575928 kubelet[2109]: I0209 19:26:24.575913 2109 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:26:24.575000 audit[2109]: AVC avc: denied { mac_admin } for pid=2109 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:24.575000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:26:24.575000 audit[2109]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0014afa70 a1=c0014b6438 a2=c0014afa40 a3=25 items=0 ppid=1 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:24.575000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:26:24.576189 kubelet[2109]: I0209 19:26:24.575968 2109 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:26:24.577645 kubelet[2109]: I0209 19:26:24.577632 2109 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:26:24.657139 kubelet[2109]: I0209 19:26:24.657083 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:24.657543 kubelet[2109]: I0209 19:26:24.657514 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:24.657779 kubelet[2109]: I0209 19:26:24.657750 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:24.684727 kubelet[2109]: I0209 19:26:24.684601 2109 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.687728 kubelet[2109]: I0209 19:26:24.687698 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.688089 kubelet[2109]: I0209 19:26:24.688062 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.688357 kubelet[2109]: I0209 19:26:24.688331 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.688651 kubelet[2109]: I0209 19:26:24.688626 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.689047 kubelet[2109]: I0209 19:26:24.689023 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/229e589d9a73b3f91e52ba60ae6ecba1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" (UID: 
\"229e589d9a73b3f91e52ba60ae6ecba1\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.689372 kubelet[2109]: I0209 19:26:24.689346 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/540a4d9839dd742a7133bf7cb31353af-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"540a4d9839dd742a7133bf7cb31353af\") " pod="kube-system/kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.690161 kubelet[2109]: E0209 19:26:24.690097 2109 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.790389 kubelet[2109]: I0209 19:26:24.790301 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aef650e3899f4d42da56ac5fc9eb41cc-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"aef650e3899f4d42da56ac5fc9eb41cc\") " pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.790734 kubelet[2109]: I0209 19:26:24.790722 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aef650e3899f4d42da56ac5fc9eb41cc-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"aef650e3899f4d42da56ac5fc9eb41cc\") " pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:24.790877 kubelet[2109]: I0209 19:26:24.790867 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aef650e3899f4d42da56ac5fc9eb41cc-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\" (UID: \"aef650e3899f4d42da56ac5fc9eb41cc\") " pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:25.245566 kubelet[2109]: I0209 19:26:25.245516 2109 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:25.245807 kubelet[2109]: I0209 19:26:25.245678 2109 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:25.445189 kubelet[2109]: I0209 19:26:25.445080 2109 apiserver.go:52] "Watching apiserver" Feb 9 19:26:25.486540 kubelet[2109]: I0209 19:26:25.486496 2109 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:26:25.499051 kubelet[2109]: I0209 19:26:25.496585 2109 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:26:25.843539 kubelet[2109]: E0209 19:26:25.843457 2109 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:26.048241 kubelet[2109]: E0209 19:26:26.048173 2109 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:26.251139 kubelet[2109]: E0209 19:26:26.251116 2109 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:26:26.457702 kubelet[2109]: I0209 19:26:26.457657 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-b-76a749f546.novalocal" podStartSLOduration=4.456635623 pod.CreationTimestamp="2024-02-09 19:26:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:26.456350116 +0000 UTC m=+2.147760268" watchObservedRunningTime="2024-02-09 19:26:26.456635623 +0000 UTC m=+2.148045766" Feb 9 19:26:26.843781 kubelet[2109]: I0209 19:26:26.843750 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-b-76a749f546.novalocal" podStartSLOduration=2.8436939519999997 pod.CreationTimestamp="2024-02-09 19:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:26.843130672 +0000 UTC m=+2.534540834" watchObservedRunningTime="2024-02-09 19:26:26.843693952 +0000 UTC m=+2.535104094" Feb 9 19:26:27.644680 kubelet[2109]: I0209 19:26:27.644651 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-b-76a749f546.novalocal" podStartSLOduration=3.644612651 pod.CreationTimestamp="2024-02-09 19:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:27.247359863 +0000 UTC m=+2.938770026" watchObservedRunningTime="2024-02-09 19:26:27.644612651 +0000 UTC m=+3.336022813" Feb 9 19:26:29.717372 sudo[1303]: pam_unix(sudo:session): session closed for user root Feb 9 19:26:29.716000 audit[1303]: USER_END pid=1303 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 19:26:29.719933 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 19:26:29.720028 kernel: audit: type=1106 audit(1707506789.716:231): pid=1303 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:26:29.716000 audit[1303]: CRED_DISP pid=1303 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:26:29.739207 kernel: audit: type=1104 audit(1707506789.716:232): pid=1303 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:26:30.000784 sshd[1297]: pam_unix(sshd:session): session closed for user core Feb 9 19:26:30.006000 audit[1297]: USER_END pid=1297 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:26:30.016361 systemd[1]: sshd@6-172.24.4.217:22-172.24.4.1:55282.service: Deactivated successfully. Feb 9 19:26:30.006000 audit[1297]: CRED_DISP pid=1297 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:26:30.018212 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:26:30.022675 systemd-logind[1122]: Session 7 logged out. Waiting for processes to exit. 
Feb 9 19:26:30.029450 kernel: audit: type=1106 audit(1707506790.006:233): pid=1297 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:26:30.029641 kernel: audit: type=1104 audit(1707506790.006:234): pid=1297 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:26:30.029714 kernel: audit: type=1131 audit(1707506790.016:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.217:22-172.24.4.1:55282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:30.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.217:22-172.24.4.1:55282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:30.030318 systemd-logind[1122]: Removed session 7. Feb 9 19:26:37.740239 kubelet[2109]: I0209 19:26:37.740211 2109 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:26:37.740998 env[1135]: time="2024-02-09T19:26:37.740916841Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 19:26:37.741433 kubelet[2109]: I0209 19:26:37.741421 2109 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:26:38.577258 kubelet[2109]: I0209 19:26:38.577224 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:38.686418 kubelet[2109]: I0209 19:26:38.686393 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bnqt\" (UniqueName: \"kubernetes.io/projected/1cc977d6-edb1-494d-837f-8e999b5739e3-kube-api-access-7bnqt\") pod \"kube-proxy-snwtm\" (UID: \"1cc977d6-edb1-494d-837f-8e999b5739e3\") " pod="kube-system/kube-proxy-snwtm" Feb 9 19:26:38.686715 kubelet[2109]: I0209 19:26:38.686699 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cc977d6-edb1-494d-837f-8e999b5739e3-xtables-lock\") pod \"kube-proxy-snwtm\" (UID: \"1cc977d6-edb1-494d-837f-8e999b5739e3\") " pod="kube-system/kube-proxy-snwtm" Feb 9 19:26:38.686855 kubelet[2109]: I0209 19:26:38.686844 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cc977d6-edb1-494d-837f-8e999b5739e3-lib-modules\") pod \"kube-proxy-snwtm\" (UID: \"1cc977d6-edb1-494d-837f-8e999b5739e3\") " pod="kube-system/kube-proxy-snwtm" Feb 9 19:26:38.687125 kubelet[2109]: I0209 19:26:38.687108 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1cc977d6-edb1-494d-837f-8e999b5739e3-kube-proxy\") pod \"kube-proxy-snwtm\" (UID: \"1cc977d6-edb1-494d-837f-8e999b5739e3\") " pod="kube-system/kube-proxy-snwtm" Feb 9 19:26:38.729117 kubelet[2109]: I0209 19:26:38.729051 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:38.886962 env[1135]: time="2024-02-09T19:26:38.886490924Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snwtm,Uid:1cc977d6-edb1-494d-837f-8e999b5739e3,Namespace:kube-system,Attempt:0,}" Feb 9 19:26:38.888096 kubelet[2109]: I0209 19:26:38.888039 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt7tq\" (UniqueName: \"kubernetes.io/projected/4324ffe3-5bf3-4050-97d5-5a0567bf745a-kube-api-access-tt7tq\") pod \"tigera-operator-cfc98749c-66px2\" (UID: \"4324ffe3-5bf3-4050-97d5-5a0567bf745a\") " pod="tigera-operator/tigera-operator-cfc98749c-66px2" Feb 9 19:26:38.888377 kubelet[2109]: I0209 19:26:38.888164 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4324ffe3-5bf3-4050-97d5-5a0567bf745a-var-lib-calico\") pod \"tigera-operator-cfc98749c-66px2\" (UID: \"4324ffe3-5bf3-4050-97d5-5a0567bf745a\") " pod="tigera-operator/tigera-operator-cfc98749c-66px2" Feb 9 19:26:38.920216 env[1135]: time="2024-02-09T19:26:38.920086298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:38.920216 env[1135]: time="2024-02-09T19:26:38.920172410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:38.920216 env[1135]: time="2024-02-09T19:26:38.920185935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:38.920879 env[1135]: time="2024-02-09T19:26:38.920790560Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba94fe9b23faffa55565c0a3297d507f75ab661acc6f0f2671d6224a8cbbb06f pid=2213 runtime=io.containerd.runc.v2 Feb 9 19:26:38.979278 env[1135]: time="2024-02-09T19:26:38.979115109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snwtm,Uid:1cc977d6-edb1-494d-837f-8e999b5739e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba94fe9b23faffa55565c0a3297d507f75ab661acc6f0f2671d6224a8cbbb06f\"" Feb 9 19:26:38.986517 env[1135]: time="2024-02-09T19:26:38.986483695Z" level=info msg="CreateContainer within sandbox \"ba94fe9b23faffa55565c0a3297d507f75ab661acc6f0f2671d6224a8cbbb06f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:26:39.019019 env[1135]: time="2024-02-09T19:26:39.018978410Z" level=info msg="CreateContainer within sandbox \"ba94fe9b23faffa55565c0a3297d507f75ab661acc6f0f2671d6224a8cbbb06f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"daebff687daae60d7cd4f8c7ab28bad6dcad5fe6eb641d198784c2912843a68e\"" Feb 9 19:26:39.021462 env[1135]: time="2024-02-09T19:26:39.020112620Z" level=info msg="StartContainer for \"daebff687daae60d7cd4f8c7ab28bad6dcad5fe6eb641d198784c2912843a68e\"" Feb 9 19:26:39.041954 env[1135]: time="2024-02-09T19:26:39.041408475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-66px2,Uid:4324ffe3-5bf3-4050-97d5-5a0567bf745a,Namespace:tigera-operator,Attempt:0,}" Feb 9 19:26:39.057590 env[1135]: time="2024-02-09T19:26:39.057530045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:39.057794 env[1135]: time="2024-02-09T19:26:39.057571012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:39.057794 env[1135]: time="2024-02-09T19:26:39.057584127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:39.057939 env[1135]: time="2024-02-09T19:26:39.057839195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57046517c3bc317aff58ddbe64f8a7baa5c557f799d36105ae5a4c6480515110 pid=2282 runtime=io.containerd.runc.v2 Feb 9 19:26:39.104460 env[1135]: time="2024-02-09T19:26:39.104419786Z" level=info msg="StartContainer for \"daebff687daae60d7cd4f8c7ab28bad6dcad5fe6eb641d198784c2912843a68e\" returns successfully" Feb 9 19:26:39.125869 env[1135]: time="2024-02-09T19:26:39.125789970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-66px2,Uid:4324ffe3-5bf3-4050-97d5-5a0567bf745a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"57046517c3bc317aff58ddbe64f8a7baa5c557f799d36105ae5a4c6480515110\"" Feb 9 19:26:39.130680 env[1135]: time="2024-02-09T19:26:39.129040834Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 19:26:39.625893 kubelet[2109]: I0209 19:26:39.625827 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-snwtm" podStartSLOduration=1.625751303 pod.CreationTimestamp="2024-02-09 19:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:39.625460567 +0000 UTC m=+15.316870789" watchObservedRunningTime="2024-02-09 19:26:39.625751303 +0000 UTC m=+15.317161525" Feb 9 19:26:39.673000 audit[2349]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.673000 audit[2349]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9ee16fa0 
a2=0 a3=7ffc9ee16f8c items=0 ppid=2269 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.694628 kernel: audit: type=1325 audit(1707506799.673:236): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.694756 kernel: audit: type=1300 audit(1707506799.673:236): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9ee16fa0 a2=0 a3=7ffc9ee16f8c items=0 ppid=2269 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:26:39.702959 kernel: audit: type=1327 audit(1707506799.673:236): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:26:39.686000 audit[2350]: NETFILTER_CFG table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.713687 kernel: audit: type=1325 audit(1707506799.686:237): table=mangle:60 family=10 entries=1 op=nft_register_chain pid=2350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.686000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8cfc4a40 a2=0 a3=7ffe8cfc4a2c items=0 ppid=2269 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.686000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:26:39.723381 kernel: audit: type=1300 audit(1707506799.686:237): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8cfc4a40 a2=0 a3=7ffe8cfc4a2c items=0 ppid=2269 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.723450 kernel: audit: type=1327 audit(1707506799.686:237): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:26:39.723480 kernel: audit: type=1325 audit(1707506799.700:238): table=nat:61 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.700000 audit[2351]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.700000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9daf8120 a2=0 a3=7fff9daf810c items=0 ppid=2269 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.731568 kernel: audit: type=1300 audit(1707506799.700:238): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9daf8120 a2=0 a3=7fff9daf810c items=0 ppid=2269 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.700000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:26:39.704000 audit[2352]: NETFILTER_CFG 
table=filter:62 family=2 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.742740 kernel: audit: type=1327 audit(1707506799.700:238): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:26:39.742804 kernel: audit: type=1325 audit(1707506799.704:239): table=filter:62 family=2 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.704000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda027d3a0 a2=0 a3=7ffda027d38c items=0 ppid=2269 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:26:39.707000 audit[2353]: NETFILTER_CFG table=nat:63 family=10 entries=1 op=nft_register_chain pid=2353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.707000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda426f3c0 a2=0 a3=7ffda426f3ac items=0 ppid=2269 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.707000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:26:39.708000 audit[2354]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.708000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca77e34d0 a2=0 a3=7ffca77e34bc items=0 ppid=2269 
pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:26:39.799000 audit[2355]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.799000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd9198aa60 a2=0 a3=7ffd9198aa4c items=0 ppid=2269 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:26:39.806000 audit[2357]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.806000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc6737f380 a2=0 a3=7ffc6737f36c items=0 ppid=2269 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:26:39.830000 audit[2360]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2360 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.830000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeaec40580 a2=0 a3=7ffeaec4056c items=0 ppid=2269 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:26:39.833000 audit[2361]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.833000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff853b1a00 a2=0 a3=7fff853b19ec items=0 ppid=2269 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:26:39.840000 audit[2363]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.840000 audit[2363]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeeb51b070 a2=0 a3=7ffeeb51b05c items=0 ppid=2269 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.840000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:26:39.844000 audit[2364]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.844000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4454e8e0 a2=0 a3=7ffe4454e8cc items=0 ppid=2269 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:26:39.846000 audit[2366]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.846000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffffb86a420 a2=0 a3=7ffffb86a40c items=0 ppid=2269 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:26:39.851000 audit[2369]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.851000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 
a1=7ffc64054440 a2=0 a3=7ffc6405442c items=0 ppid=2269 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:26:39.852000 audit[2370]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.852000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6f826e60 a2=0 a3=7fff6f826e4c items=0 ppid=2269 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:26:39.855000 audit[2372]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.855000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffb33ae50 a2=0 a3=7ffffb33ae3c items=0 ppid=2269 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.855000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:26:39.856000 audit[2373]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.856000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1dae2590 a2=0 a3=7ffd1dae257c items=0 ppid=2269 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:26:39.859000 audit[2375]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.859000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc25da830 a2=0 a3=7fffc25da81c items=0 ppid=2269 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:26:39.862000 audit[2378]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2378 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.862000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7fff354a7270 a2=0 a3=7fff354a725c items=0 ppid=2269 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:26:39.866000 audit[2381]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2381 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.866000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdddbfd9a0 a2=0 a3=7ffdddbfd98c items=0 ppid=2269 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:26:39.867000 audit[2382]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.867000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff7c63d3c0 a2=0 a3=7fff7c63d3ac items=0 ppid=2269 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.867000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:26:39.871000 audit[2384]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.871000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff752c9b20 a2=0 a3=7fff752c9b0c items=0 ppid=2269 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.871000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:26:39.876000 audit[2387]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:39.876000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe92ab05d0 a2=0 a3=7ffe92ab05bc items=0 ppid=2269 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:26:39.899000 audit[2391]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:39.899000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff09bf3ec0 a2=0 a3=7fff09bf3eac items=0 ppid=2269 pid=2391 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:39.907000 audit[2391]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:39.907000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff09bf3ec0 a2=0 a3=7fff09bf3eac items=0 ppid=2269 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:39.912000 audit[2396]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.912000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffcb213d10 a2=0 a3=7fffcb213cfc items=0 ppid=2269 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:26:39.916000 audit[2398]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.916000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffcf4c9cb70 a2=0 a3=7ffcf4c9cb5c items=0 ppid=2269 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:26:39.922000 audit[2401]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.922000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc5ede2bd0 a2=0 a3=7ffc5ede2bbc items=0 ppid=2269 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:26:39.924000 audit[2402]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.924000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbd13e290 a2=0 a3=7ffdbd13e27c items=0 ppid=2269 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.924000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:26:39.928000 audit[2404]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.928000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc9dd06ad0 a2=0 a3=7ffc9dd06abc items=0 ppid=2269 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:26:39.930000 audit[2405]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.930000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe40f73400 a2=0 a3=7ffe40f733ec items=0 ppid=2269 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:26:39.933000 audit[2407]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.933000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcc97cb830 a2=0 a3=7ffcc97cb81c items=0 ppid=2269 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:26:39.940000 audit[2410]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.940000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeb1595310 a2=0 a3=7ffeb15952fc items=0 ppid=2269 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:26:39.942000 audit[2411]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.942000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdead22770 a2=0 a3=7ffdead2275c items=0 ppid=2269 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:26:39.946000 audit[2413]: NETFILTER_CFG table=filter:93 
family=10 entries=1 op=nft_register_rule pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.946000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff999796e0 a2=0 a3=7fff999796cc items=0 ppid=2269 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:26:39.949000 audit[2414]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.949000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc11299b00 a2=0 a3=7ffc11299aec items=0 ppid=2269 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:26:39.955000 audit[2416]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.955000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1e5aeda0 a2=0 a3=7ffe1e5aed8c items=0 ppid=2269 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.955000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:26:39.966000 audit[2419]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2419 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.966000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc5a007f0 a2=0 a3=7ffdc5a007dc items=0 ppid=2269 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:26:39.974000 audit[2422]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.974000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe87c081c0 a2=0 a3=7ffe87c081ac items=0 ppid=2269 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:26:39.976000 audit[2423]: NETFILTER_CFG table=nat:98 family=10 entries=1 
op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.976000 audit[2423]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe50ad5740 a2=0 a3=7ffe50ad572c items=0 ppid=2269 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:26:39.982000 audit[2425]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.982000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffaf50e7c0 a2=0 a3=7fffaf50e7ac items=0 ppid=2269 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:26:39.986000 audit[2428]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:39.986000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe729ba1c0 a2=0 a3=7ffe729ba1ac items=0 ppid=2269 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.986000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:26:39.999000 audit[2432]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:26:39.999000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffe539db50 a2=0 a3=7fffe539db3c items=0 ppid=2269 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.999000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:40.001000 audit[2432]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:26:40.001000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7fffe539db50 a2=0 a3=7fffe539db3c items=0 ppid=2269 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:40.001000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:40.598850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034384482.mount: Deactivated successfully. 
Feb 9 19:26:42.634815 env[1135]: time="2024-02-09T19:26:42.634747242Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:42.637918 env[1135]: time="2024-02-09T19:26:42.637873642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:42.639826 env[1135]: time="2024-02-09T19:26:42.639782174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:42.642647 env[1135]: time="2024-02-09T19:26:42.642612187Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:42.643016 env[1135]: time="2024-02-09T19:26:42.642986049Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 9 19:26:42.646330 env[1135]: time="2024-02-09T19:26:42.646287847Z" level=info msg="CreateContainer within sandbox \"57046517c3bc317aff58ddbe64f8a7baa5c557f799d36105ae5a4c6480515110\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 19:26:42.674137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295999670.mount: Deactivated successfully. Feb 9 19:26:42.683337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900883861.mount: Deactivated successfully. 
Feb 9 19:26:42.684092 env[1135]: time="2024-02-09T19:26:42.683947383Z" level=info msg="CreateContainer within sandbox \"57046517c3bc317aff58ddbe64f8a7baa5c557f799d36105ae5a4c6480515110\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aba31736d2b28253ba497e5ad19d9fbc791a43ff71f82437bf9afe76eb5ea16a\"" Feb 9 19:26:42.684949 env[1135]: time="2024-02-09T19:26:42.684918266Z" level=info msg="StartContainer for \"aba31736d2b28253ba497e5ad19d9fbc791a43ff71f82437bf9afe76eb5ea16a\"" Feb 9 19:26:42.768203 env[1135]: time="2024-02-09T19:26:42.768156303Z" level=info msg="StartContainer for \"aba31736d2b28253ba497e5ad19d9fbc791a43ff71f82437bf9afe76eb5ea16a\" returns successfully" Feb 9 19:26:44.777000 audit[2495]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:44.779798 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 19:26:44.779877 kernel: audit: type=1325 audit(1707506804.777:280): table=filter:103 family=2 entries=13 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:44.777000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd2a73fcd0 a2=0 a3=7ffd2a73fcbc items=0 ppid=2269 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:44.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:44.791922 kernel: audit: type=1300 audit(1707506804.777:280): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd2a73fcd0 a2=0 a3=7ffd2a73fcbc items=0 ppid=2269 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:44.792016 kernel: audit: type=1327 audit(1707506804.777:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:44.792041 kernel: audit: type=1325 audit(1707506804.788:281): table=nat:104 family=2 entries=20 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:44.788000 audit[2495]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:44.788000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd2a73fcd0 a2=0 a3=7ffd2a73fcbc items=0 ppid=2269 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:44.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:44.804311 kernel: audit: type=1300 audit(1707506804.788:281): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd2a73fcd0 a2=0 a3=7ffd2a73fcbc items=0 ppid=2269 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:44.804395 kernel: audit: type=1327 audit(1707506804.788:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:44.837000 audit[2521]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:44.837000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 
a1=7ffec57afbf0 a2=0 a3=7ffec57afbdc items=0 ppid=2269 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:44.847485 kernel: audit: type=1325 audit(1707506804.837:282): table=filter:105 family=2 entries=14 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:44.847574 kernel: audit: type=1300 audit(1707506804.837:282): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffec57afbf0 a2=0 a3=7ffec57afbdc items=0 ppid=2269 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:44.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:44.851927 kernel: audit: type=1327 audit(1707506804.837:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:44.856000 audit[2521]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:44.856000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffec57afbf0 a2=0 a3=7ffec57afbdc items=0 ppid=2269 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:44.861034 kernel: audit: type=1325 audit(1707506804.856:283): table=nat:106 family=2 entries=20 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:44.856000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:44.896306 kubelet[2109]: I0209 19:26:44.896260 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-66px2" podStartSLOduration=-9.223372029958557e+09 pod.CreationTimestamp="2024-02-09 19:26:38 +0000 UTC" firstStartedPulling="2024-02-09 19:26:39.127466549 +0000 UTC m=+14.818876691" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:43.624926668 +0000 UTC m=+19.316336830" watchObservedRunningTime="2024-02-09 19:26:44.896219459 +0000 UTC m=+20.587629621" Feb 9 19:26:44.896810 kubelet[2109]: I0209 19:26:44.896466 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:45.012147 kubelet[2109]: I0209 19:26:45.012106 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:45.031051 kubelet[2109]: I0209 19:26:45.030889 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-tigera-ca-bundle\") pod \"calico-typha-864947df6c-7kq64\" (UID: \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\") " pod="calico-system/calico-typha-864947df6c-7kq64" Feb 9 19:26:45.031323 kubelet[2109]: I0209 19:26:45.031301 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zppg\" (UniqueName: \"kubernetes.io/projected/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-kube-api-access-5zppg\") pod \"calico-typha-864947df6c-7kq64\" (UID: \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\") " pod="calico-system/calico-typha-864947df6c-7kq64" Feb 9 19:26:45.031455 kubelet[2109]: I0209 19:26:45.031442 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-typha-certs\") pod \"calico-typha-864947df6c-7kq64\" (UID: \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\") " pod="calico-system/calico-typha-864947df6c-7kq64" Feb 9 19:26:45.127353 kubelet[2109]: I0209 19:26:45.127318 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:45.127864 kubelet[2109]: E0209 19:26:45.127843 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:26:45.132434 kubelet[2109]: I0209 19:26:45.132400 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-bin-dir\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.132761 kubelet[2109]: I0209 19:26:45.132737 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-flexvol-driver-host\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.132939 kubelet[2109]: I0209 19:26:45.132924 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-lib-modules\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.133081 kubelet[2109]: I0209 19:26:45.133066 2109 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-xtables-lock\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.133210 kubelet[2109]: I0209 19:26:45.133197 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-log-dir\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.133393 kubelet[2109]: I0209 19:26:45.133377 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-policysync\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.133512 kubelet[2109]: I0209 19:26:45.133500 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/27a8874a-f11c-45a2-a9d3-c239f4fdac31-node-certs\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.133629 kubelet[2109]: I0209 19:26:45.133616 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-net-dir\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.133796 kubelet[2109]: I0209 19:26:45.133760 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrpqb\" 
(UniqueName: \"kubernetes.io/projected/27a8874a-f11c-45a2-a9d3-c239f4fdac31-kube-api-access-lrpqb\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.133977 kubelet[2109]: I0209 19:26:45.133961 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27a8874a-f11c-45a2-a9d3-c239f4fdac31-tigera-ca-bundle\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.134247 kubelet[2109]: I0209 19:26:45.134232 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-var-run-calico\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.134350 kubelet[2109]: I0209 19:26:45.134339 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-var-lib-calico\") pod \"calico-node-bhbt6\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " pod="calico-system/calico-node-bhbt6" Feb 9 19:26:45.199818 env[1135]: time="2024-02-09T19:26:45.199747571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-864947df6c-7kq64,Uid:fa234e83-45ab-4e9a-9ee9-e75c55c049d4,Namespace:calico-system,Attempt:0,}" Feb 9 19:26:45.235160 kubelet[2109]: I0209 19:26:45.235126 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258b5f6f-f507-494e-8282-83a91907d3f5-kubelet-dir\") pod \"csi-node-driver-x2vsr\" (UID: \"258b5f6f-f507-494e-8282-83a91907d3f5\") " 
pod="calico-system/csi-node-driver-x2vsr" Feb 9 19:26:45.235885 kubelet[2109]: I0209 19:26:45.235872 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/258b5f6f-f507-494e-8282-83a91907d3f5-varrun\") pod \"csi-node-driver-x2vsr\" (UID: \"258b5f6f-f507-494e-8282-83a91907d3f5\") " pod="calico-system/csi-node-driver-x2vsr" Feb 9 19:26:45.236588 kubelet[2109]: E0209 19:26:45.236575 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.236672 kubelet[2109]: W0209 19:26:45.236658 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.236749 kubelet[2109]: E0209 19:26:45.236739 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.237008 kubelet[2109]: E0209 19:26:45.236984 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.237088 kubelet[2109]: W0209 19:26:45.237076 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.237159 kubelet[2109]: E0209 19:26:45.237149 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.237354 kubelet[2109]: E0209 19:26:45.237344 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.237428 kubelet[2109]: W0209 19:26:45.237416 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.237495 kubelet[2109]: E0209 19:26:45.237486 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.237679 kubelet[2109]: E0209 19:26:45.237669 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.237753 kubelet[2109]: W0209 19:26:45.237742 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.237863 kubelet[2109]: E0209 19:26:45.237852 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.238131 kubelet[2109]: E0209 19:26:45.238120 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.238205 kubelet[2109]: W0209 19:26:45.238194 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.238274 kubelet[2109]: E0209 19:26:45.238264 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.238467 kubelet[2109]: E0209 19:26:45.238457 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.238540 kubelet[2109]: W0209 19:26:45.238528 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.238607 kubelet[2109]: E0209 19:26:45.238598 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.238799 kubelet[2109]: E0209 19:26:45.238788 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.242681 kubelet[2109]: W0209 19:26:45.242664 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.242758 kubelet[2109]: E0209 19:26:45.242748 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.243065 kubelet[2109]: E0209 19:26:45.243054 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.243185 kubelet[2109]: W0209 19:26:45.243130 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.243264 kubelet[2109]: E0209 19:26:45.243253 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.243481 kubelet[2109]: E0209 19:26:45.243471 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.243556 kubelet[2109]: W0209 19:26:45.243545 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.243625 kubelet[2109]: E0209 19:26:45.243616 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.243883 kubelet[2109]: E0209 19:26:45.243872 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.243989 kubelet[2109]: W0209 19:26:45.243977 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.244059 kubelet[2109]: E0209 19:26:45.244049 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.244144 kubelet[2109]: I0209 19:26:45.244134 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/258b5f6f-f507-494e-8282-83a91907d3f5-socket-dir\") pod \"csi-node-driver-x2vsr\" (UID: \"258b5f6f-f507-494e-8282-83a91907d3f5\") " pod="calico-system/csi-node-driver-x2vsr" Feb 9 19:26:45.244338 kubelet[2109]: E0209 19:26:45.244328 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.244411 kubelet[2109]: W0209 19:26:45.244398 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.244610 kubelet[2109]: E0209 19:26:45.244598 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.244699 kubelet[2109]: W0209 19:26:45.244685 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.245245 kubelet[2109]: E0209 19:26:45.244606 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.245409 kubelet[2109]: E0209 19:26:45.245342 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.245461 kubelet[2109]: W0209 19:26:45.245411 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.245519 kubelet[2109]: I0209 19:26:45.245397 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnvsj\" (UniqueName: \"kubernetes.io/projected/258b5f6f-f507-494e-8282-83a91907d3f5-kube-api-access-gnvsj\") pod \"csi-node-driver-x2vsr\" (UID: \"258b5f6f-f507-494e-8282-83a91907d3f5\") " pod="calico-system/csi-node-driver-x2vsr" Feb 9 19:26:45.245580 kubelet[2109]: E0209 19:26:45.245354 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.245654 kubelet[2109]: E0209 19:26:45.245644 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.245828 kubelet[2109]: E0209 19:26:45.245813 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.245828 kubelet[2109]: W0209 19:26:45.245826 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.246028 kubelet[2109]: E0209 19:26:45.245976 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.246105 kubelet[2109]: E0209 19:26:45.246057 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.246173 kubelet[2109]: W0209 19:26:45.246161 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.246263 kubelet[2109]: E0209 19:26:45.246243 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.246469 kubelet[2109]: E0209 19:26:45.246459 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.246541 kubelet[2109]: W0209 19:26:45.246529 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.246619 kubelet[2109]: E0209 19:26:45.246609 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.246871 kubelet[2109]: E0209 19:26:45.246856 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.246871 kubelet[2109]: W0209 19:26:45.246870 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.247005 kubelet[2109]: E0209 19:26:45.246914 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.247205 kubelet[2109]: E0209 19:26:45.247190 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.247205 kubelet[2109]: W0209 19:26:45.247204 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.247325 kubelet[2109]: E0209 19:26:45.247312 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.247442 kubelet[2109]: E0209 19:26:45.247427 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.247442 kubelet[2109]: W0209 19:26:45.247441 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.247561 kubelet[2109]: E0209 19:26:45.247549 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.247659 kubelet[2109]: E0209 19:26:45.247641 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.247659 kubelet[2109]: W0209 19:26:45.247657 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.247768 kubelet[2109]: E0209 19:26:45.247757 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.247853 kubelet[2109]: E0209 19:26:45.247833 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.247853 kubelet[2109]: W0209 19:26:45.247848 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.248056 kubelet[2109]: E0209 19:26:45.248043 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.248121 kubelet[2109]: E0209 19:26:45.248082 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.248234 kubelet[2109]: W0209 19:26:45.248221 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.248307 kubelet[2109]: E0209 19:26:45.248298 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.248528 kubelet[2109]: E0209 19:26:45.248517 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.248603 kubelet[2109]: W0209 19:26:45.248592 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.248684 kubelet[2109]: E0209 19:26:45.248673 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.248935 kubelet[2109]: E0209 19:26:45.248883 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.248935 kubelet[2109]: W0209 19:26:45.248916 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.249017 kubelet[2109]: E0209 19:26:45.248937 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.249653 kubelet[2109]: E0209 19:26:45.249642 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.249735 kubelet[2109]: W0209 19:26:45.249723 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.249805 kubelet[2109]: E0209 19:26:45.249797 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.250108 kubelet[2109]: E0209 19:26:45.250098 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.250187 kubelet[2109]: W0209 19:26:45.250175 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.250263 kubelet[2109]: E0209 19:26:45.250252 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.250435 kubelet[2109]: E0209 19:26:45.250419 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.250435 kubelet[2109]: W0209 19:26:45.250435 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.250524 kubelet[2109]: E0209 19:26:45.250457 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.250698 kubelet[2109]: E0209 19:26:45.250683 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.250698 kubelet[2109]: W0209 19:26:45.250696 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.250790 kubelet[2109]: E0209 19:26:45.250718 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.250890 kubelet[2109]: E0209 19:26:45.250875 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.250890 kubelet[2109]: W0209 19:26:45.250888 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.250890 kubelet[2109]: E0209 19:26:45.250944 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.251102 kubelet[2109]: E0209 19:26:45.251082 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.251102 kubelet[2109]: W0209 19:26:45.251098 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.251196 kubelet[2109]: E0209 19:26:45.251110 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.251249 kubelet[2109]: E0209 19:26:45.251231 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.251249 kubelet[2109]: W0209 19:26:45.251245 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.251319 kubelet[2109]: E0209 19:26:45.251257 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.251460 kubelet[2109]: E0209 19:26:45.251446 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.251460 kubelet[2109]: W0209 19:26:45.251459 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.251576 kubelet[2109]: E0209 19:26:45.251563 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.251649 kubelet[2109]: E0209 19:26:45.251585 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.251714 kubelet[2109]: W0209 19:26:45.251702 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.251779 kubelet[2109]: E0209 19:26:45.251771 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.252082 kubelet[2109]: E0209 19:26:45.252072 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.252157 kubelet[2109]: W0209 19:26:45.252146 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.252219 kubelet[2109]: E0209 19:26:45.252210 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.252302 kubelet[2109]: I0209 19:26:45.252292 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/258b5f6f-f507-494e-8282-83a91907d3f5-registration-dir\") pod \"csi-node-driver-x2vsr\" (UID: \"258b5f6f-f507-494e-8282-83a91907d3f5\") " pod="calico-system/csi-node-driver-x2vsr" Feb 9 19:26:45.252567 kubelet[2109]: E0209 19:26:45.252557 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.252640 kubelet[2109]: W0209 19:26:45.252628 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.252707 kubelet[2109]: E0209 19:26:45.252697 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.252962 kubelet[2109]: E0209 19:26:45.252948 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.253041 kubelet[2109]: W0209 19:26:45.253031 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.253103 kubelet[2109]: E0209 19:26:45.253095 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.354996 kubelet[2109]: E0209 19:26:45.354159 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.355289 kubelet[2109]: W0209 19:26:45.355244 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.355480 kubelet[2109]: E0209 19:26:45.355457 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.356099 kubelet[2109]: E0209 19:26:45.356074 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.356261 kubelet[2109]: W0209 19:26:45.356236 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.356428 kubelet[2109]: E0209 19:26:45.356406 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.357216 kubelet[2109]: E0209 19:26:45.357191 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.357404 kubelet[2109]: W0209 19:26:45.357377 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.357572 kubelet[2109]: E0209 19:26:45.357551 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.358094 kubelet[2109]: E0209 19:26:45.358068 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.358333 kubelet[2109]: W0209 19:26:45.358304 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.358491 kubelet[2109]: E0209 19:26:45.358470 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.359012 kubelet[2109]: E0209 19:26:45.358987 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.359191 kubelet[2109]: W0209 19:26:45.359164 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.359355 kubelet[2109]: E0209 19:26:45.359332 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.359814 kubelet[2109]: E0209 19:26:45.359791 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.360034 kubelet[2109]: W0209 19:26:45.360005 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.360452 kubelet[2109]: E0209 19:26:45.360429 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.360706 kubelet[2109]: W0209 19:26:45.360656 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.361213 kubelet[2109]: E0209 19:26:45.361190 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.361426 kubelet[2109]: W0209 19:26:45.361397 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Feb 9 19:26:45.361619 kubelet[2109]: E0209 19:26:45.361596 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.362057 kubelet[2109]: E0209 19:26:45.361983 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.362188 kubelet[2109]: E0209 19:26:45.362156 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.362450 kubelet[2109]: E0209 19:26:45.362424 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.362636 kubelet[2109]: W0209 19:26:45.362609 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.362804 kubelet[2109]: E0209 19:26:45.362783 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.363356 kubelet[2109]: E0209 19:26:45.363333 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.363573 kubelet[2109]: W0209 19:26:45.363543 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.363761 kubelet[2109]: E0209 19:26:45.363737 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.364472 kubelet[2109]: E0209 19:26:45.364262 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.364472 kubelet[2109]: W0209 19:26:45.364294 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.364472 kubelet[2109]: E0209 19:26:45.364366 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.364752 kubelet[2109]: E0209 19:26:45.364648 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.364752 kubelet[2109]: W0209 19:26:45.364667 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.364752 kubelet[2109]: E0209 19:26:45.364697 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.365042 kubelet[2109]: E0209 19:26:45.364999 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.365042 kubelet[2109]: W0209 19:26:45.365018 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.365042 kubelet[2109]: E0209 19:26:45.365044 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.367808 kubelet[2109]: E0209 19:26:45.365361 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.367808 kubelet[2109]: W0209 19:26:45.365410 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.367808 kubelet[2109]: E0209 19:26:45.365438 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.367808 kubelet[2109]: E0209 19:26:45.365736 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.367808 kubelet[2109]: W0209 19:26:45.365762 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.367808 kubelet[2109]: E0209 19:26:45.365802 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.367808 kubelet[2109]: E0209 19:26:45.366137 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.367808 kubelet[2109]: W0209 19:26:45.366155 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.367808 kubelet[2109]: E0209 19:26:45.366181 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.367808 kubelet[2109]: E0209 19:26:45.366477 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.368599 kubelet[2109]: W0209 19:26:45.366496 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.368599 kubelet[2109]: E0209 19:26:45.366522 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.368599 kubelet[2109]: E0209 19:26:45.366933 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.368599 kubelet[2109]: W0209 19:26:45.366953 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.368599 kubelet[2109]: E0209 19:26:45.366980 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.368599 kubelet[2109]: E0209 19:26:45.367336 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.368599 kubelet[2109]: W0209 19:26:45.367356 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.368599 kubelet[2109]: E0209 19:26:45.367383 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.368599 kubelet[2109]: E0209 19:26:45.367634 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.368599 kubelet[2109]: W0209 19:26:45.367651 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.369273 kubelet[2109]: E0209 19:26:45.367676 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.369273 kubelet[2109]: E0209 19:26:45.368069 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.369273 kubelet[2109]: W0209 19:26:45.368095 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.369273 kubelet[2109]: E0209 19:26:45.368128 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.369273 kubelet[2109]: E0209 19:26:45.368408 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.369273 kubelet[2109]: W0209 19:26:45.368426 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.369273 kubelet[2109]: E0209 19:26:45.368452 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.369273 kubelet[2109]: E0209 19:26:45.368757 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.369273 kubelet[2109]: W0209 19:26:45.368775 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.369273 kubelet[2109]: E0209 19:26:45.368799 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.371538 kubelet[2109]: E0209 19:26:45.369113 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.371538 kubelet[2109]: W0209 19:26:45.369131 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.371538 kubelet[2109]: E0209 19:26:45.369157 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.371538 kubelet[2109]: E0209 19:26:45.371310 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.371538 kubelet[2109]: W0209 19:26:45.371330 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.371538 kubelet[2109]: E0209 19:26:45.371359 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.379287 kubelet[2109]: E0209 19:26:45.379231 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.379287 kubelet[2109]: W0209 19:26:45.379263 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.379287 kubelet[2109]: E0209 19:26:45.379292 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.463300 kubelet[2109]: E0209 19:26:45.463053 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.463300 kubelet[2109]: W0209 19:26:45.463088 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.463300 kubelet[2109]: E0209 19:26:45.463122 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.467178 kubelet[2109]: E0209 19:26:45.467145 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.467384 kubelet[2109]: W0209 19:26:45.467355 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.467590 kubelet[2109]: E0209 19:26:45.467557 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.568720 kubelet[2109]: E0209 19:26:45.568682 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.569084 kubelet[2109]: W0209 19:26:45.569052 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.569269 kubelet[2109]: E0209 19:26:45.569245 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.569754 kubelet[2109]: E0209 19:26:45.569731 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.569964 kubelet[2109]: W0209 19:26:45.569934 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.570179 kubelet[2109]: E0209 19:26:45.570156 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.607819 env[1135]: time="2024-02-09T19:26:45.605734481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:45.607819 env[1135]: time="2024-02-09T19:26:45.605825352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:45.607819 env[1135]: time="2024-02-09T19:26:45.605857091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:45.607819 env[1135]: time="2024-02-09T19:26:45.606087984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9 pid=2604 runtime=io.containerd.runc.v2 Feb 9 19:26:45.621984 kubelet[2109]: E0209 19:26:45.621952 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.622435 kubelet[2109]: W0209 19:26:45.622405 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.622685 kubelet[2109]: E0209 19:26:45.622633 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.675064 kubelet[2109]: E0209 19:26:45.674996 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.675064 kubelet[2109]: W0209 19:26:45.675038 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.675064 kubelet[2109]: E0209 19:26:45.675064 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:45.683913 env[1135]: time="2024-02-09T19:26:45.683850880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-864947df6c-7kq64,Uid:fa234e83-45ab-4e9a-9ee9-e75c55c049d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\"" Feb 9 19:26:45.686736 env[1135]: time="2024-02-09T19:26:45.686706019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 19:26:45.710789 kubelet[2109]: E0209 19:26:45.710757 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:45.710789 kubelet[2109]: W0209 19:26:45.710777 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:45.710789 kubelet[2109]: E0209 19:26:45.710796 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:45.916380 env[1135]: time="2024-02-09T19:26:45.916272901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhbt6,Uid:27a8874a-f11c-45a2-a9d3-c239f4fdac31,Namespace:calico-system,Attempt:0,}" Feb 9 19:26:45.946870 env[1135]: time="2024-02-09T19:26:45.946494785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:45.946870 env[1135]: time="2024-02-09T19:26:45.946544589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:45.946870 env[1135]: time="2024-02-09T19:26:45.946558735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:45.946870 env[1135]: time="2024-02-09T19:26:45.946687416Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e pid=2673 runtime=io.containerd.runc.v2 Feb 9 19:26:45.970000 audit[2690]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:45.970000 audit[2690]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffe7dcf2be0 a2=0 a3=7ffe7dcf2bcc items=0 ppid=2269 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:45.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:45.971000 audit[2690]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2690 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:45.971000 audit[2690]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe7dcf2be0 a2=0 a3=7ffe7dcf2bcc items=0 ppid=2269 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:45.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:46.176858 env[1135]: time="2024-02-09T19:26:46.176816538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bhbt6,Uid:27a8874a-f11c-45a2-a9d3-c239f4fdac31,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\"" Feb 9 19:26:46.557748 kubelet[2109]: E0209 19:26:46.557131 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:26:47.215000 audit[2748]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=2748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:47.215000 audit[2748]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffcbbe28570 a2=0 a3=7ffcbbe2855c items=0 ppid=2269 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:47.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:47.216000 audit[2748]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=2748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:47.216000 audit[2748]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffcbbe28570 a2=0 a3=7ffcbbe2855c items=0 ppid=2269 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:47.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:48.178944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871087433.mount: Deactivated successfully. 
Feb 9 19:26:48.557489 kubelet[2109]: E0209 19:26:48.556878 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:26:50.557716 kubelet[2109]: E0209 19:26:50.557672 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:26:51.728967 env[1135]: time="2024-02-09T19:26:51.728785294Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:51.733289 env[1135]: time="2024-02-09T19:26:51.733233492Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:51.738096 env[1135]: time="2024-02-09T19:26:51.737989515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:51.741524 env[1135]: time="2024-02-09T19:26:51.741495405Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:51.744912 env[1135]: time="2024-02-09T19:26:51.744842966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference 
\"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 9 19:26:51.748226 env[1135]: time="2024-02-09T19:26:51.746889336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:26:51.768696 env[1135]: time="2024-02-09T19:26:51.768635943Z" level=info msg="CreateContainer within sandbox \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 19:26:51.796297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3543248328.mount: Deactivated successfully. Feb 9 19:26:51.802393 env[1135]: time="2024-02-09T19:26:51.802267161Z" level=info msg="CreateContainer within sandbox \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\"" Feb 9 19:26:51.805292 env[1135]: time="2024-02-09T19:26:51.803674962Z" level=info msg="StartContainer for \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\"" Feb 9 19:26:51.911926 env[1135]: time="2024-02-09T19:26:51.905054078Z" level=info msg="StartContainer for \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\" returns successfully" Feb 9 19:26:52.557979 kubelet[2109]: E0209 19:26:52.557864 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:26:52.652502 env[1135]: time="2024-02-09T19:26:52.652385766Z" level=info msg="StopContainer for \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\" with timeout 300 (s)" Feb 9 19:26:52.657475 env[1135]: time="2024-02-09T19:26:52.653497462Z" level=info msg="Stop container 
\"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\" with signal terminated" Feb 9 19:26:52.707329 kubelet[2109]: I0209 19:26:52.705097 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-864947df6c-7kq64" podStartSLOduration=-9.223372028149767e+09 pod.CreationTimestamp="2024-02-09 19:26:44 +0000 UTC" firstStartedPulling="2024-02-09 19:26:45.686109209 +0000 UTC m=+21.377519361" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:52.697590579 +0000 UTC m=+28.389000802" watchObservedRunningTime="2024-02-09 19:26:52.705008008 +0000 UTC m=+28.396418230" Feb 9 19:26:52.752601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac-rootfs.mount: Deactivated successfully. Feb 9 19:26:52.801000 audit[2841]: NETFILTER_CFG table=filter:111 family=2 entries=13 op=nft_register_rule pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:52.940280 kernel: kauditd_printk_skb: 14 callbacks suppressed Feb 9 19:26:52.940379 kernel: audit: type=1325 audit(1707506812.801:288): table=filter:111 family=2 entries=13 op=nft_register_rule pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:52.940406 kernel: audit: type=1300 audit(1707506812.801:288): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffc1f341180 a2=0 a3=7ffc1f34116c items=0 ppid=2269 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:52.940430 kernel: audit: type=1327 audit(1707506812.801:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:52.940459 kernel: audit: type=1325 audit(1707506812.812:289): table=nat:112 family=2 entries=27 
op=nft_register_chain pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:52.940482 kernel: audit: type=1300 audit(1707506812.812:289): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffc1f341180 a2=0 a3=7ffc1f34116c items=0 ppid=2269 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:52.940504 kernel: audit: type=1327 audit(1707506812.812:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:52.801000 audit[2841]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffc1f341180 a2=0 a3=7ffc1f34116c items=0 ppid=2269 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:52.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:52.812000 audit[2841]: NETFILTER_CFG table=nat:112 family=2 entries=27 op=nft_register_chain pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:52.812000 audit[2841]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffc1f341180 a2=0 a3=7ffc1f34116c items=0 ppid=2269 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:52.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:53.167583 env[1135]: time="2024-02-09T19:26:53.167496677Z" level=info msg="shim disconnected" 
id=a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac Feb 9 19:26:53.168604 env[1135]: time="2024-02-09T19:26:53.168557588Z" level=warning msg="cleaning up after shim disconnected" id=a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac namespace=k8s.io Feb 9 19:26:53.168773 env[1135]: time="2024-02-09T19:26:53.168738006Z" level=info msg="cleaning up dead shim" Feb 9 19:26:53.186326 env[1135]: time="2024-02-09T19:26:53.186248743Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:26:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2842 runtime=io.containerd.runc.v2\n" Feb 9 19:26:53.194254 env[1135]: time="2024-02-09T19:26:53.193500561Z" level=info msg="StopContainer for \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\" returns successfully" Feb 9 19:26:53.196009 env[1135]: time="2024-02-09T19:26:53.195870947Z" level=info msg="StopPodSandbox for \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\"" Feb 9 19:26:53.196205 env[1135]: time="2024-02-09T19:26:53.196077646Z" level=info msg="Container to stop \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:26:53.200878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9-shm.mount: Deactivated successfully. Feb 9 19:26:53.260394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9-rootfs.mount: Deactivated successfully. 
Feb 9 19:26:53.276648 env[1135]: time="2024-02-09T19:26:53.276535333Z" level=info msg="shim disconnected" id=3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9 Feb 9 19:26:53.276803 env[1135]: time="2024-02-09T19:26:53.276653545Z" level=warning msg="cleaning up after shim disconnected" id=3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9 namespace=k8s.io Feb 9 19:26:53.276803 env[1135]: time="2024-02-09T19:26:53.276679564Z" level=info msg="cleaning up dead shim" Feb 9 19:26:53.289391 env[1135]: time="2024-02-09T19:26:53.289308991Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:26:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2876 runtime=io.containerd.runc.v2\n" Feb 9 19:26:53.290108 env[1135]: time="2024-02-09T19:26:53.290056313Z" level=info msg="TearDown network for sandbox \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\" successfully" Feb 9 19:26:53.290191 env[1135]: time="2024-02-09T19:26:53.290112498Z" level=info msg="StopPodSandbox for \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\" returns successfully" Feb 9 19:26:53.340619 kubelet[2109]: E0209 19:26:53.340371 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.340619 kubelet[2109]: W0209 19:26:53.340458 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.340619 kubelet[2109]: E0209 19:26:53.340513 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.340806 kubelet[2109]: I0209 19:26:53.340663 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zppg\" (UniqueName: \"kubernetes.io/projected/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-kube-api-access-5zppg\") pod \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\" (UID: \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\") " Feb 9 19:26:53.341206 kubelet[2109]: E0209 19:26:53.341181 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.341274 kubelet[2109]: W0209 19:26:53.341210 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.341309 kubelet[2109]: E0209 19:26:53.341282 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.341341 kubelet[2109]: I0209 19:26:53.341333 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-typha-certs\") pod \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\" (UID: \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\") " Feb 9 19:26:53.342281 kubelet[2109]: E0209 19:26:53.341858 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.342281 kubelet[2109]: W0209 19:26:53.341885 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.342281 kubelet[2109]: E0209 19:26:53.341965 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.342511 kubelet[2109]: E0209 19:26:53.342407 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.342511 kubelet[2109]: W0209 19:26:53.342427 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.342511 kubelet[2109]: E0209 19:26:53.342453 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.342511 kubelet[2109]: I0209 19:26:53.342500 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-tigera-ca-bundle\") pod \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\" (UID: \"fa234e83-45ab-4e9a-9ee9-e75c55c049d4\") " Feb 9 19:26:53.343586 kubelet[2109]: E0209 19:26:53.343532 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.343586 kubelet[2109]: W0209 19:26:53.343583 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.343741 kubelet[2109]: E0209 19:26:53.343613 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.352150 systemd[1]: var-lib-kubelet-pods-fa234e83\x2d45ab\x2d4e9a\x2d9ee9\x2de75c55c049d4-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 9 19:26:53.357516 systemd[1]: var-lib-kubelet-pods-fa234e83\x2d45ab\x2d4e9a\x2d9ee9\x2de75c55c049d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5zppg.mount: Deactivated successfully. Feb 9 19:26:53.359828 kubelet[2109]: I0209 19:26:53.359784 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-kube-api-access-5zppg" (OuterVolumeSpecName: "kube-api-access-5zppg") pod "fa234e83-45ab-4e9a-9ee9-e75c55c049d4" (UID: "fa234e83-45ab-4e9a-9ee9-e75c55c049d4"). InnerVolumeSpecName "kube-api-access-5zppg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:26:53.360295 kubelet[2109]: E0209 19:26:53.360275 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.360433 kubelet[2109]: W0209 19:26:53.360409 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.360567 kubelet[2109]: E0209 19:26:53.360547 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.360843 kubelet[2109]: W0209 19:26:53.360822 2109 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/fa234e83-45ab-4e9a-9ee9-e75c55c049d4/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 9 19:26:53.361372 kubelet[2109]: I0209 19:26:53.361339 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "fa234e83-45ab-4e9a-9ee9-e75c55c049d4" (UID: "fa234e83-45ab-4e9a-9ee9-e75c55c049d4"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:26:53.365189 kubelet[2109]: I0209 19:26:53.365100 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "fa234e83-45ab-4e9a-9ee9-e75c55c049d4" (UID: "fa234e83-45ab-4e9a-9ee9-e75c55c049d4"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:26:53.443816 kubelet[2109]: I0209 19:26:53.443767 2109 reconciler_common.go:295] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-typha-certs\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:53.444233 kubelet[2109]: I0209 19:26:53.444178 2109 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-tigera-ca-bundle\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:53.445168 kubelet[2109]: I0209 19:26:53.444483 2109 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-5zppg\" (UniqueName: \"kubernetes.io/projected/fa234e83-45ab-4e9a-9ee9-e75c55c049d4-kube-api-access-5zppg\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:53.657175 kubelet[2109]: I0209 19:26:53.657123 2109 scope.go:115] "RemoveContainer" containerID="a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac" Feb 9 19:26:53.669861 env[1135]: time="2024-02-09T19:26:53.669528117Z" level=info msg="RemoveContainer for \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\"" Feb 9 19:26:53.724955 kubelet[2109]: I0209 19:26:53.724084 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:53.724955 kubelet[2109]: E0209 19:26:53.724175 2109 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fa234e83-45ab-4e9a-9ee9-e75c55c049d4" containerName="calico-typha" Feb 9 19:26:53.724955 kubelet[2109]: I0209 19:26:53.724229 2109 memory_manager.go:346] "RemoveStaleState removing state" podUID="fa234e83-45ab-4e9a-9ee9-e75c55c049d4" containerName="calico-typha" Feb 9 19:26:53.743563 env[1135]: time="2024-02-09T19:26:53.743523579Z" level=info msg="RemoveContainer for \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\" returns 
successfully" Feb 9 19:26:53.745626 kubelet[2109]: I0209 19:26:53.745586 2109 scope.go:115] "RemoveContainer" containerID="a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac" Feb 9 19:26:53.746618 env[1135]: time="2024-02-09T19:26:53.746393214Z" level=error msg="ContainerStatus for \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\": not found" Feb 9 19:26:53.747340 kubelet[2109]: E0209 19:26:53.747322 2109 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\": not found" containerID="a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac" Feb 9 19:26:53.747481 kubelet[2109]: I0209 19:26:53.747463 2109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac} err="failed to get container status \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"a89e1552ad5828e899f3243f05ed9399f9e1f9edf654c9f8a9d23a9cf35e83ac\": not found" Feb 9 19:26:53.752491 systemd[1]: var-lib-kubelet-pods-fa234e83\x2d45ab\x2d4e9a\x2d9ee9\x2de75c55c049d4-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Feb 9 19:26:53.789000 audit[2939]: NETFILTER_CFG table=filter:113 family=2 entries=13 op=nft_register_rule pid=2939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:53.789000 audit[2939]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffee8f655f0 a2=0 a3=7ffee8f655dc items=0 ppid=2269 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:53.798545 kernel: audit: type=1325 audit(1707506813.789:290): table=filter:113 family=2 entries=13 op=nft_register_rule pid=2939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:53.798619 kernel: audit: type=1300 audit(1707506813.789:290): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffee8f655f0 a2=0 a3=7ffee8f655dc items=0 ppid=2269 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:53.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:53.801450 kernel: audit: type=1327 audit(1707506813.789:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:53.801510 kernel: audit: type=1325 audit(1707506813.789:291): table=nat:114 family=2 entries=27 op=nft_unregister_chain pid=2939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:53.789000 audit[2939]: NETFILTER_CFG table=nat:114 family=2 entries=27 op=nft_unregister_chain pid=2939 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:53.789000 audit[2939]: SYSCALL arch=c000003e syscall=46 success=yes exit=5596 a0=3 a1=7ffee8f655f0 a2=0 a3=7ffee8f655dc items=0 ppid=2269 pid=2939 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:53.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:53.817283 kubelet[2109]: E0209 19:26:53.817261 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.817459 kubelet[2109]: W0209 19:26:53.817444 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.817551 kubelet[2109]: E0209 19:26:53.817540 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.817824 kubelet[2109]: E0209 19:26:53.817814 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.817921 kubelet[2109]: W0209 19:26:53.817884 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.818048 kubelet[2109]: E0209 19:26:53.818033 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.818614 kubelet[2109]: E0209 19:26:53.818602 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.818706 kubelet[2109]: W0209 19:26:53.818694 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.818780 kubelet[2109]: E0209 19:26:53.818768 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.819141 kubelet[2109]: E0209 19:26:53.819130 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.819273 kubelet[2109]: W0209 19:26:53.819260 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.819342 kubelet[2109]: E0209 19:26:53.819333 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.819660 kubelet[2109]: E0209 19:26:53.819648 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.819732 kubelet[2109]: W0209 19:26:53.819721 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.819805 kubelet[2109]: E0209 19:26:53.819795 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.820224 kubelet[2109]: E0209 19:26:53.820195 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.820298 kubelet[2109]: W0209 19:26:53.820286 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.820372 kubelet[2109]: E0209 19:26:53.820363 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.820684 kubelet[2109]: E0209 19:26:53.820672 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.820763 kubelet[2109]: W0209 19:26:53.820752 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.820889 kubelet[2109]: E0209 19:26:53.820877 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.821202 kubelet[2109]: E0209 19:26:53.821191 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.821301 kubelet[2109]: W0209 19:26:53.821286 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.821418 kubelet[2109]: E0209 19:26:53.821406 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.821677 kubelet[2109]: E0209 19:26:53.821666 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.821770 kubelet[2109]: W0209 19:26:53.821757 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.821844 kubelet[2109]: E0209 19:26:53.821835 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.840000 audit[2974]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:53.840000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffcf0e48c40 a2=0 a3=7ffcf0e48c2c items=0 ppid=2269 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:53.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:53.841000 audit[2974]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:53.841000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffcf0e48c40 a2=0 a3=7ffcf0e48c2c items=0 ppid=2269 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:53.841000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:53.848827 kubelet[2109]: E0209 19:26:53.848790 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.848827 kubelet[2109]: W0209 19:26:53.848821 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.848975 kubelet[2109]: E0209 19:26:53.848847 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.848975 kubelet[2109]: I0209 19:26:53.848882 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b23a4019-61d7-43f0-b02c-d965986ce42a-typha-certs\") pod \"calico-typha-5fcbcb486f-klg84\" (UID: \"b23a4019-61d7-43f0-b02c-d965986ce42a\") " pod="calico-system/calico-typha-5fcbcb486f-klg84" Feb 9 19:26:53.849205 kubelet[2109]: E0209 19:26:53.849175 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.849205 kubelet[2109]: W0209 19:26:53.849194 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.849205 kubelet[2109]: E0209 19:26:53.849208 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.849321 kubelet[2109]: I0209 19:26:53.849233 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23a4019-61d7-43f0-b02c-d965986ce42a-tigera-ca-bundle\") pod \"calico-typha-5fcbcb486f-klg84\" (UID: \"b23a4019-61d7-43f0-b02c-d965986ce42a\") " pod="calico-system/calico-typha-5fcbcb486f-klg84" Feb 9 19:26:53.849491 kubelet[2109]: E0209 19:26:53.849471 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.849491 kubelet[2109]: W0209 19:26:53.849488 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.849574 kubelet[2109]: E0209 19:26:53.849501 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.849574 kubelet[2109]: I0209 19:26:53.849525 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smlh4\" (UniqueName: \"kubernetes.io/projected/b23a4019-61d7-43f0-b02c-d965986ce42a-kube-api-access-smlh4\") pod \"calico-typha-5fcbcb486f-klg84\" (UID: \"b23a4019-61d7-43f0-b02c-d965986ce42a\") " pod="calico-system/calico-typha-5fcbcb486f-klg84" Feb 9 19:26:53.850689 kubelet[2109]: E0209 19:26:53.850250 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.850689 kubelet[2109]: W0209 19:26:53.850270 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.850689 kubelet[2109]: E0209 19:26:53.850284 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.850689 kubelet[2109]: E0209 19:26:53.850477 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.850689 kubelet[2109]: W0209 19:26:53.850486 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.850689 kubelet[2109]: E0209 19:26:53.850499 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.850689 kubelet[2109]: E0209 19:26:53.850675 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.850689 kubelet[2109]: W0209 19:26:53.850686 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.850965 kubelet[2109]: E0209 19:26:53.850699 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.850965 kubelet[2109]: E0209 19:26:53.850853 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.850965 kubelet[2109]: W0209 19:26:53.850867 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.850965 kubelet[2109]: E0209 19:26:53.850879 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:53.851071 kubelet[2109]: E0209 19:26:53.851049 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.851071 kubelet[2109]: W0209 19:26:53.851059 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.851071 kubelet[2109]: E0209 19:26:53.851072 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:53.851254 kubelet[2109]: E0209 19:26:53.851231 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:53.851254 kubelet[2109]: W0209 19:26:53.851247 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:53.851333 kubelet[2109]: E0209 19:26:53.851260 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 9 19:26:53.951165 kubelet[2109]: E0209 19:26:53.951116 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:26:53.951165 kubelet[2109]: W0209 19:26:53.951156 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:26:53.951340 kubelet[2109]: E0209 19:26:53.951196 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 9 19:26:54.108782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847116958.mount: Deactivated successfully.
Feb 9 19:26:54.328134 env[1135]: time="2024-02-09T19:26:54.327999195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fcbcb486f-klg84,Uid:b23a4019-61d7-43f0-b02c-d965986ce42a,Namespace:calico-system,Attempt:0,}"
Feb 9 19:26:54.352414 env[1135]: time="2024-02-09T19:26:54.352318105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:26:54.352572 env[1135]: time="2024-02-09T19:26:54.352393155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:26:54.352572 env[1135]: time="2024-02-09T19:26:54.352408474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:26:54.353146 env[1135]: time="2024-02-09T19:26:54.353018408Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e959c066bd21a17a40d419135292c42aa16f49e177fd584063a826a1307ee23a pid=3011 runtime=io.containerd.runc.v2
Feb 9 19:26:54.473306 env[1135]: time="2024-02-09T19:26:54.473220424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fcbcb486f-klg84,Uid:b23a4019-61d7-43f0-b02c-d965986ce42a,Namespace:calico-system,Attempt:0,} returns sandbox id \"e959c066bd21a17a40d419135292c42aa16f49e177fd584063a826a1307ee23a\""
Feb 9 19:26:54.487130 env[1135]: time="2024-02-09T19:26:54.487079978Z" level=info msg="CreateContainer within sandbox \"e959c066bd21a17a40d419135292c42aa16f49e177fd584063a826a1307ee23a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 9 19:26:54.558037 kubelet[2109]: E0209 19:26:54.557413 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5
Feb 9 19:26:54.562570 kubelet[2109]: I0209 19:26:54.562526 2109 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fa234e83-45ab-4e9a-9ee9-e75c55c049d4 path="/var/lib/kubelet/pods/fa234e83-45ab-4e9a-9ee9-e75c55c049d4/volumes"
Feb 9 19:26:55.212087 env[1135]: time="2024-02-09T19:26:55.211883338Z" level=info msg="CreateContainer within sandbox \"e959c066bd21a17a40d419135292c42aa16f49e177fd584063a826a1307ee23a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"afcadb0a26459d1131f16fd46c678006fb7557d19757271247649abccc738f89\""
Feb 9 19:26:55.213782 env[1135]: time="2024-02-09T19:26:55.213726175Z" level=info msg="StartContainer for \"afcadb0a26459d1131f16fd46c678006fb7557d19757271247649abccc738f89\""
Feb 9 19:26:55.268653 systemd[1]: run-containerd-runc-k8s.io-afcadb0a26459d1131f16fd46c678006fb7557d19757271247649abccc738f89-runc.7fhKgm.mount: Deactivated successfully.
Feb 9 19:26:55.384832 env[1135]: time="2024-02-09T19:26:55.384736153Z" level=info msg="StartContainer for \"afcadb0a26459d1131f16fd46c678006fb7557d19757271247649abccc738f89\" returns successfully"
Feb 9 19:26:55.686540 kubelet[2109]: I0209 19:26:55.686295 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5fcbcb486f-klg84" podStartSLOduration=10.686212318 pod.CreationTimestamp="2024-02-09 19:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:55.685604187 +0000 UTC m=+31.377014360" watchObservedRunningTime="2024-02-09 19:26:55.686212318 +0000 UTC m=+31.377622480"
Feb 9 19:26:55.737739 kubelet[2109]: E0209 19:26:55.737498 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:26:55.737739 kubelet[2109]: W0209 19:26:55.737524 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:26:55.737739 kubelet[2109]: E0209 19:26:55.737579 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 9 19:26:56.559649 kubelet[2109]: E0209 19:26:56.557740 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5
Feb 9 19:26:56.666280 kubelet[2109]: I0209 19:26:56.666234 2109 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 9 19:26:56.753639 kubelet[2109]: E0209 19:26:56.753597 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 9 19:26:56.753639 kubelet[2109]: W0209 19:26:56.753630 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 9 19:26:56.753639 kubelet[2109]: E0209 19:26:56.753660 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:56.799506 kubelet[2109]: E0209 19:26:56.796538 2109 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:56.799506 kubelet[2109]: W0209 19:26:56.796568 2109 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:56.799506 kubelet[2109]: E0209 19:26:56.796671 2109 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:56.856019 env[1135]: time="2024-02-09T19:26:56.849945375Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:56.858436 env[1135]: time="2024-02-09T19:26:56.858362296Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:56.864422 env[1135]: time="2024-02-09T19:26:56.864365489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:56.870483 env[1135]: time="2024-02-09T19:26:56.870420840Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:56.871804 env[1135]: time="2024-02-09T19:26:56.871773959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference 
\"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 19:26:56.877272 env[1135]: time="2024-02-09T19:26:56.877228253Z" level=info msg="CreateContainer within sandbox \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:26:56.896280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457775433.mount: Deactivated successfully. Feb 9 19:26:56.922930 env[1135]: time="2024-02-09T19:26:56.922855959Z" level=info msg="CreateContainer within sandbox \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b\"" Feb 9 19:26:56.925531 env[1135]: time="2024-02-09T19:26:56.925148902Z" level=info msg="StartContainer for \"d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b\"" Feb 9 19:26:57.020313 env[1135]: time="2024-02-09T19:26:57.018586739Z" level=info msg="StartContainer for \"d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b\" returns successfully" Feb 9 19:26:57.054734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b-rootfs.mount: Deactivated successfully. 
Feb 9 19:26:57.216414 env[1135]: time="2024-02-09T19:26:57.216313677Z" level=info msg="shim disconnected" id=d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b Feb 9 19:26:57.217106 env[1135]: time="2024-02-09T19:26:57.217059245Z" level=warning msg="cleaning up after shim disconnected" id=d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b namespace=k8s.io Feb 9 19:26:57.217308 env[1135]: time="2024-02-09T19:26:57.217272194Z" level=info msg="cleaning up dead shim" Feb 9 19:26:57.234333 env[1135]: time="2024-02-09T19:26:57.234222086Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:26:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3192 runtime=io.containerd.runc.v2\n" Feb 9 19:26:57.675099 env[1135]: time="2024-02-09T19:26:57.675023840Z" level=info msg="StopPodSandbox for \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\"" Feb 9 19:26:57.675591 env[1135]: time="2024-02-09T19:26:57.675539988Z" level=info msg="Container to stop \"d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:26:57.765810 env[1135]: time="2024-02-09T19:26:57.765752682Z" level=info msg="shim disconnected" id=59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e Feb 9 19:26:57.766105 env[1135]: time="2024-02-09T19:26:57.766080947Z" level=warning msg="cleaning up after shim disconnected" id=59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e namespace=k8s.io Feb 9 19:26:57.766203 env[1135]: time="2024-02-09T19:26:57.766187257Z" level=info msg="cleaning up dead shim" Feb 9 19:26:57.775978 env[1135]: time="2024-02-09T19:26:57.775846630Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:26:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3229 runtime=io.containerd.runc.v2\n" Feb 9 19:26:57.776520 env[1135]: time="2024-02-09T19:26:57.776472674Z" level=info msg="TearDown network for 
sandbox \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\" successfully" Feb 9 19:26:57.776576 env[1135]: time="2024-02-09T19:26:57.776529471Z" level=info msg="StopPodSandbox for \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\" returns successfully" Feb 9 19:26:57.893427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e-rootfs.mount: Deactivated successfully. Feb 9 19:26:57.893888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e-shm.mount: Deactivated successfully. Feb 9 19:26:57.906104 kubelet[2109]: I0209 19:26:57.906053 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-xtables-lock\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.907319 kubelet[2109]: I0209 19:26:57.906168 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.907662 kubelet[2109]: I0209 19:26:57.907626 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrpqb\" (UniqueName: \"kubernetes.io/projected/27a8874a-f11c-45a2-a9d3-c239f4fdac31-kube-api-access-lrpqb\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.908029 kubelet[2109]: I0209 19:26:57.907991 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-var-run-calico\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.908344 kubelet[2109]: I0209 19:26:57.908310 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-policysync\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.908824 kubelet[2109]: I0209 19:26:57.908762 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-bin-dir\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.909061 kubelet[2109]: I0209 19:26:57.908867 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-flexvol-driver-host\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.909208 kubelet[2109]: I0209 19:26:57.909046 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-log-dir\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.909208 kubelet[2109]: I0209 19:26:57.909163 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-net-dir\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.909464 kubelet[2109]: I0209 19:26:57.909254 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27a8874a-f11c-45a2-a9d3-c239f4fdac31-tigera-ca-bundle\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.909464 kubelet[2109]: I0209 19:26:57.909335 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-var-lib-calico\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.909464 kubelet[2109]: I0209 19:26:57.909414 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-lib-modules\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.909836 kubelet[2109]: I0209 19:26:57.909497 2109 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/27a8874a-f11c-45a2-a9d3-c239f4fdac31-node-certs\") pod \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\" (UID: \"27a8874a-f11c-45a2-a9d3-c239f4fdac31\") " Feb 9 19:26:57.909836 kubelet[2109]: I0209 19:26:57.909630 2109 reconciler_common.go:295] 
"Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-xtables-lock\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:57.910402 kubelet[2109]: I0209 19:26:57.908594 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-policysync" (OuterVolumeSpecName: "policysync") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.910402 kubelet[2109]: I0209 19:26:57.908670 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.910402 kubelet[2109]: I0209 19:26:57.910321 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.910763 kubelet[2109]: I0209 19:26:57.910386 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.910763 kubelet[2109]: I0209 19:26:57.910456 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.910763 kubelet[2109]: I0209 19:26:57.910524 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.911257 kubelet[2109]: W0209 19:26:57.910854 2109 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/27a8874a-f11c-45a2-a9d3-c239f4fdac31/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 9 19:26:57.911590 kubelet[2109]: I0209 19:26:57.911543 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.911875 kubelet[2109]: I0209 19:26:57.911830 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:26:57.912613 kubelet[2109]: I0209 19:26:57.912481 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27a8874a-f11c-45a2-a9d3-c239f4fdac31-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:26:57.925746 systemd[1]: var-lib-kubelet-pods-27a8874a\x2df11c\x2d45a2\x2da9d3\x2dc239f4fdac31-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 9 19:26:57.930978 kubelet[2109]: I0209 19:26:57.930828 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a8874a-f11c-45a2-a9d3-c239f4fdac31-node-certs" (OuterVolumeSpecName: "node-certs") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:26:57.938498 systemd[1]: var-lib-kubelet-pods-27a8874a\x2df11c\x2d45a2\x2da9d3\x2dc239f4fdac31-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlrpqb.mount: Deactivated successfully. Feb 9 19:26:57.939207 kubelet[2109]: I0209 19:26:57.939145 2109 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a8874a-f11c-45a2-a9d3-c239f4fdac31-kube-api-access-lrpqb" (OuterVolumeSpecName: "kube-api-access-lrpqb") pod "27a8874a-f11c-45a2-a9d3-c239f4fdac31" (UID: "27a8874a-f11c-45a2-a9d3-c239f4fdac31"). InnerVolumeSpecName "kube-api-access-lrpqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:26:58.010132 kubelet[2109]: I0209 19:26:58.010065 2109 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-lrpqb\" (UniqueName: \"kubernetes.io/projected/27a8874a-f11c-45a2-a9d3-c239f4fdac31-kube-api-access-lrpqb\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.010132 kubelet[2109]: I0209 19:26:58.010145 2109 reconciler_common.go:295] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-var-run-calico\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.010457 kubelet[2109]: I0209 19:26:58.010183 2109 reconciler_common.go:295] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-log-dir\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.010457 kubelet[2109]: I0209 19:26:58.010215 2109 reconciler_common.go:295] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-policysync\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.010457 kubelet[2109]: I0209 19:26:58.010246 2109 reconciler_common.go:295] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-bin-dir\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.010457 kubelet[2109]: I0209 19:26:58.010298 2109 reconciler_common.go:295] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-flexvol-driver-host\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.010457 kubelet[2109]: I0209 19:26:58.010330 2109 reconciler_common.go:295] "Volume detached for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-cni-net-dir\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.010457 kubelet[2109]: I0209 19:26:58.010362 2109 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27a8874a-f11c-45a2-a9d3-c239f4fdac31-tigera-ca-bundle\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.010457 kubelet[2109]: I0209 19:26:58.010391 2109 reconciler_common.go:295] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-var-lib-calico\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.011111 kubelet[2109]: I0209 19:26:58.010420 2109 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a8874a-f11c-45a2-a9d3-c239f4fdac31-lib-modules\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.011111 kubelet[2109]: I0209 19:26:58.010451 2109 reconciler_common.go:295] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/27a8874a-f11c-45a2-a9d3-c239f4fdac31-node-certs\") on node \"ci-3510-3-2-b-76a749f546.novalocal\" DevicePath \"\"" Feb 9 19:26:58.558509 kubelet[2109]: E0209 19:26:58.558442 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:26:58.681134 kubelet[2109]: I0209 19:26:58.681088 2109 scope.go:115] "RemoveContainer" containerID="d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b" Feb 9 19:26:58.687356 env[1135]: time="2024-02-09T19:26:58.685457497Z" level=info msg="RemoveContainer for 
\"d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b\"" Feb 9 19:26:58.710960 env[1135]: time="2024-02-09T19:26:58.709740364Z" level=info msg="RemoveContainer for \"d0c0578abd031d2402f6d55dd79dc45f4efe1d3c3c86be6e5b09fdfcb3b81b2b\" returns successfully" Feb 9 19:26:58.747366 kubelet[2109]: I0209 19:26:58.747318 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:58.747575 kubelet[2109]: E0209 19:26:58.747419 2109 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="27a8874a-f11c-45a2-a9d3-c239f4fdac31" containerName="flexvol-driver" Feb 9 19:26:58.747575 kubelet[2109]: I0209 19:26:58.747480 2109 memory_manager.go:346] "RemoveStaleState removing state" podUID="27a8874a-f11c-45a2-a9d3-c239f4fdac31" containerName="flexvol-driver" Feb 9 19:26:58.817353 kubelet[2109]: I0209 19:26:58.817219 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-xtables-lock\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817353 kubelet[2109]: I0209 19:26:58.817300 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-var-lib-calico\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817353 kubelet[2109]: I0209 19:26:58.817354 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-lib-modules\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817614 kubelet[2109]: I0209 19:26:58.817398 2109 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-policysync\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817614 kubelet[2109]: I0209 19:26:58.817479 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/119a397e-28d5-4ff4-840a-baa93c3e3f00-tigera-ca-bundle\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817614 kubelet[2109]: I0209 19:26:58.817542 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-var-run-calico\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817614 kubelet[2109]: I0209 19:26:58.817600 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-cni-log-dir\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817748 kubelet[2109]: I0209 19:26:58.817660 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4gm\" (UniqueName: \"kubernetes.io/projected/119a397e-28d5-4ff4-840a-baa93c3e3f00-kube-api-access-bj4gm\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817786 kubelet[2109]: I0209 19:26:58.817754 2109 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/119a397e-28d5-4ff4-840a-baa93c3e3f00-node-certs\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.817823 kubelet[2109]: I0209 19:26:58.817809 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-cni-bin-dir\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.818418 kubelet[2109]: I0209 19:26:58.817864 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-cni-net-dir\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:58.818418 kubelet[2109]: I0209 19:26:58.817986 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/119a397e-28d5-4ff4-840a-baa93c3e3f00-flexvol-driver-host\") pod \"calico-node-c7vdt\" (UID: \"119a397e-28d5-4ff4-840a-baa93c3e3f00\") " pod="calico-system/calico-node-c7vdt" Feb 9 19:26:59.067720 env[1135]: time="2024-02-09T19:26:59.067010344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c7vdt,Uid:119a397e-28d5-4ff4-840a-baa93c3e3f00,Namespace:calico-system,Attempt:0,}" Feb 9 19:26:59.110495 env[1135]: time="2024-02-09T19:26:59.102083874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:59.110495 env[1135]: time="2024-02-09T19:26:59.102132134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:59.110495 env[1135]: time="2024-02-09T19:26:59.102146471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:59.110495 env[1135]: time="2024-02-09T19:26:59.102279180Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e pid=3259 runtime=io.containerd.runc.v2 Feb 9 19:26:59.173135 env[1135]: time="2024-02-09T19:26:59.173051595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c7vdt,Uid:119a397e-28d5-4ff4-840a-baa93c3e3f00,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e\"" Feb 9 19:26:59.176291 env[1135]: time="2024-02-09T19:26:59.176224087Z" level=info msg="CreateContainer within sandbox \"d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:26:59.236770 env[1135]: time="2024-02-09T19:26:59.236706597Z" level=info msg="CreateContainer within sandbox \"d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2eee6dd047fb7071ef4ba669b18528a423e12b76cc8824f8653006c12d543d62\"" Feb 9 19:26:59.237616 env[1135]: time="2024-02-09T19:26:59.237516336Z" level=info msg="StartContainer for \"2eee6dd047fb7071ef4ba669b18528a423e12b76cc8824f8653006c12d543d62\"" Feb 9 19:26:59.319946 env[1135]: time="2024-02-09T19:26:59.319806049Z" level=info msg="StartContainer for \"2eee6dd047fb7071ef4ba669b18528a423e12b76cc8824f8653006c12d543d62\" 
returns successfully" Feb 9 19:26:59.503073 env[1135]: time="2024-02-09T19:26:59.503015774Z" level=info msg="shim disconnected" id=2eee6dd047fb7071ef4ba669b18528a423e12b76cc8824f8653006c12d543d62 Feb 9 19:26:59.503417 env[1135]: time="2024-02-09T19:26:59.503381812Z" level=warning msg="cleaning up after shim disconnected" id=2eee6dd047fb7071ef4ba669b18528a423e12b76cc8824f8653006c12d543d62 namespace=k8s.io Feb 9 19:26:59.503519 env[1135]: time="2024-02-09T19:26:59.503502157Z" level=info msg="cleaning up dead shim" Feb 9 19:26:59.516436 env[1135]: time="2024-02-09T19:26:59.516362865Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:26:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3333 runtime=io.containerd.runc.v2\n" Feb 9 19:26:59.700574 env[1135]: time="2024-02-09T19:26:59.700130598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 19:26:59.931591 systemd[1]: run-containerd-runc-k8s.io-d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e-runc.SyHVEm.mount: Deactivated successfully. Feb 9 19:27:00.557772 kubelet[2109]: E0209 19:27:00.557698 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:00.569316 kubelet[2109]: I0209 19:27:00.569185 2109 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=27a8874a-f11c-45a2-a9d3-c239f4fdac31 path="/var/lib/kubelet/pods/27a8874a-f11c-45a2-a9d3-c239f4fdac31/volumes" Feb 9 19:27:01.769215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515302767.mount: Deactivated successfully. 
Feb 9 19:27:02.560634 kubelet[2109]: E0209 19:27:02.559117 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:04.559367 kubelet[2109]: E0209 19:27:04.559287 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:06.561352 kubelet[2109]: E0209 19:27:06.557394 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:08.556875 kubelet[2109]: E0209 19:27:08.556825 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:08.675984 env[1135]: time="2024-02-09T19:27:08.675805233Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:08.682171 env[1135]: time="2024-02-09T19:27:08.682106157Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:27:08.689337 env[1135]: time="2024-02-09T19:27:08.689283792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:08.695282 env[1135]: time="2024-02-09T19:27:08.695191586Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:08.698269 env[1135]: time="2024-02-09T19:27:08.698180127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 19:27:08.707385 env[1135]: time="2024-02-09T19:27:08.707278874Z" level=info msg="CreateContainer within sandbox \"d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:27:08.736581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680850829.mount: Deactivated successfully. 
Feb 9 19:27:08.747280 env[1135]: time="2024-02-09T19:27:08.747165890Z" level=info msg="CreateContainer within sandbox \"d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"37f84f5f53af497f61f48de2e78d73983282aaee343efbc38906df7176f8850f\"" Feb 9 19:27:08.752693 env[1135]: time="2024-02-09T19:27:08.748638276Z" level=info msg="StartContainer for \"37f84f5f53af497f61f48de2e78d73983282aaee343efbc38906df7176f8850f\"" Feb 9 19:27:08.969235 env[1135]: time="2024-02-09T19:27:08.968739663Z" level=info msg="StartContainer for \"37f84f5f53af497f61f48de2e78d73983282aaee343efbc38906df7176f8850f\" returns successfully" Feb 9 19:27:10.557738 kubelet[2109]: E0209 19:27:10.557690 2109 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:10.981381 kubelet[2109]: I0209 19:27:10.979555 2109 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:27:11.022227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37f84f5f53af497f61f48de2e78d73983282aaee343efbc38906df7176f8850f-rootfs.mount: Deactivated successfully. 
Feb 9 19:27:11.033366 kubelet[2109]: I0209 19:27:11.031233 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:27:11.045251 kubelet[2109]: I0209 19:27:11.045226 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:27:11.045570 kubelet[2109]: I0209 19:27:11.045555 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:27:11.053367 env[1135]: time="2024-02-09T19:27:11.053123437Z" level=info msg="shim disconnected" id=37f84f5f53af497f61f48de2e78d73983282aaee343efbc38906df7176f8850f Feb 9 19:27:11.053367 env[1135]: time="2024-02-09T19:27:11.053192076Z" level=warning msg="cleaning up after shim disconnected" id=37f84f5f53af497f61f48de2e78d73983282aaee343efbc38906df7176f8850f namespace=k8s.io Feb 9 19:27:11.053367 env[1135]: time="2024-02-09T19:27:11.053204470Z" level=info msg="cleaning up dead shim" Feb 9 19:27:11.076929 env[1135]: time="2024-02-09T19:27:11.074785333Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:27:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3409 runtime=io.containerd.runc.v2\n" Feb 9 19:27:11.129177 kubelet[2109]: I0209 19:27:11.129156 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9a58a43-cc9b-49e7-89c7-8d2f444dd31a-tigera-ca-bundle\") pod \"calico-kube-controllers-6f585564b5-stdvx\" (UID: \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\") " pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" Feb 9 19:27:11.129412 kubelet[2109]: I0209 19:27:11.129400 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93fdcc9f-1773-437d-8e12-9052cf2f26e5-config-volume\") pod \"coredns-787d4945fb-gcw56\" (UID: \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\") " pod="kube-system/coredns-787d4945fb-gcw56" Feb 9 19:27:11.129527 kubelet[2109]: I0209 19:27:11.129517 2109 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/856ba022-e379-4cd0-87a4-cdfa313ac255-config-volume\") pod \"coredns-787d4945fb-f8g65\" (UID: \"856ba022-e379-4cd0-87a4-cdfa313ac255\") " pod="kube-system/coredns-787d4945fb-f8g65" Feb 9 19:27:11.129639 kubelet[2109]: I0209 19:27:11.129628 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4n7\" (UniqueName: \"kubernetes.io/projected/93fdcc9f-1773-437d-8e12-9052cf2f26e5-kube-api-access-2h4n7\") pod \"coredns-787d4945fb-gcw56\" (UID: \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\") " pod="kube-system/coredns-787d4945fb-gcw56" Feb 9 19:27:11.129750 kubelet[2109]: I0209 19:27:11.129740 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pglh\" (UniqueName: \"kubernetes.io/projected/d9a58a43-cc9b-49e7-89c7-8d2f444dd31a-kube-api-access-4pglh\") pod \"calico-kube-controllers-6f585564b5-stdvx\" (UID: \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\") " pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" Feb 9 19:27:11.129859 kubelet[2109]: I0209 19:27:11.129849 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8d9\" (UniqueName: \"kubernetes.io/projected/856ba022-e379-4cd0-87a4-cdfa313ac255-kube-api-access-ht8d9\") pod \"coredns-787d4945fb-f8g65\" (UID: \"856ba022-e379-4cd0-87a4-cdfa313ac255\") " pod="kube-system/coredns-787d4945fb-f8g65" Feb 9 19:27:11.335424 env[1135]: time="2024-02-09T19:27:11.335245100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gcw56,Uid:93fdcc9f-1773-437d-8e12-9052cf2f26e5,Namespace:kube-system,Attempt:0,}" Feb 9 19:27:11.349467 env[1135]: time="2024-02-09T19:27:11.349375729Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6f585564b5-stdvx,Uid:d9a58a43-cc9b-49e7-89c7-8d2f444dd31a,Namespace:calico-system,Attempt:0,}" Feb 9 19:27:11.360181 env[1135]: time="2024-02-09T19:27:11.359648300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-f8g65,Uid:856ba022-e379-4cd0-87a4-cdfa313ac255,Namespace:kube-system,Attempt:0,}" Feb 9 19:27:11.538378 env[1135]: time="2024-02-09T19:27:11.538315308Z" level=error msg="Failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.538951 env[1135]: time="2024-02-09T19:27:11.538917361Z" level=error msg="encountered an error cleaning up failed sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.539085 env[1135]: time="2024-02-09T19:27:11.539055132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f585564b5-stdvx,Uid:d9a58a43-cc9b-49e7-89c7-8d2f444dd31a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.540717 kubelet[2109]: E0209 19:27:11.539373 2109 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.540717 kubelet[2109]: E0209 19:27:11.539432 2109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" Feb 9 19:27:11.540717 kubelet[2109]: E0209 19:27:11.539458 2109 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" Feb 9 19:27:11.540865 kubelet[2109]: E0209 19:27:11.539514 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f585564b5-stdvx_calico-system(d9a58a43-cc9b-49e7-89c7-8d2f444dd31a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f585564b5-stdvx_calico-system(d9a58a43-cc9b-49e7-89c7-8d2f444dd31a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" podUID=d9a58a43-cc9b-49e7-89c7-8d2f444dd31a Feb 9 19:27:11.542461 env[1135]: time="2024-02-09T19:27:11.542369835Z" level=error msg="Failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.543839 env[1135]: time="2024-02-09T19:27:11.543279139Z" level=error msg="encountered an error cleaning up failed sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.543839 env[1135]: time="2024-02-09T19:27:11.543338610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-f8g65,Uid:856ba022-e379-4cd0-87a4-cdfa313ac255,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.544005 kubelet[2109]: E0209 19:27:11.543587 2109 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.544005 kubelet[2109]: E0209 19:27:11.543660 2109 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-f8g65" Feb 9 19:27:11.544005 kubelet[2109]: E0209 19:27:11.543696 2109 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-f8g65" Feb 9 19:27:11.544101 kubelet[2109]: E0209 19:27:11.543825 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-f8g65_kube-system(856ba022-e379-4cd0-87a4-cdfa313ac255)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-f8g65_kube-system(856ba022-e379-4cd0-87a4-cdfa313ac255)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-f8g65" podUID=856ba022-e379-4cd0-87a4-cdfa313ac255 Feb 9 19:27:11.551948 env[1135]: time="2024-02-09T19:27:11.551864682Z" level=error msg="Failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.552368 env[1135]: time="2024-02-09T19:27:11.552338775Z" level=error msg="encountered an error cleaning up failed sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.552488 env[1135]: time="2024-02-09T19:27:11.552459693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gcw56,Uid:93fdcc9f-1773-437d-8e12-9052cf2f26e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.553881 kubelet[2109]: E0209 19:27:11.552721 2109 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.553881 kubelet[2109]: E0209 19:27:11.552767 2109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-787d4945fb-gcw56" Feb 9 19:27:11.553881 kubelet[2109]: E0209 19:27:11.552791 2109 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-gcw56" Feb 9 19:27:11.554038 kubelet[2109]: E0209 19:27:11.552842 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-gcw56_kube-system(93fdcc9f-1773-437d-8e12-9052cf2f26e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-gcw56_kube-system(93fdcc9f-1773-437d-8e12-9052cf2f26e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gcw56" podUID=93fdcc9f-1773-437d-8e12-9052cf2f26e5 Feb 9 19:27:11.722789 kubelet[2109]: I0209 19:27:11.722682 2109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:27:11.724499 env[1135]: time="2024-02-09T19:27:11.724395760Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:27:11.744093 env[1135]: time="2024-02-09T19:27:11.744030097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:27:11.752986 kubelet[2109]: I0209 19:27:11.752011 2109 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:27:11.765052 env[1135]: time="2024-02-09T19:27:11.762363964Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:27:11.766519 kubelet[2109]: I0209 19:27:11.766460 2109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:27:11.781071 env[1135]: time="2024-02-09T19:27:11.776833471Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:27:11.840378 env[1135]: time="2024-02-09T19:27:11.840304737Z" level=error msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" failed" error="failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.840780 kubelet[2109]: E0209 19:27:11.840750 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:27:11.840858 kubelet[2109]: E0209 19:27:11.840799 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281} Feb 9 19:27:11.840858 kubelet[2109]: E0209 19:27:11.840846 2109 kuberuntime_manager.go:705] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:11.841035 kubelet[2109]: E0209 19:27:11.840883 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gcw56" podUID=93fdcc9f-1773-437d-8e12-9052cf2f26e5 Feb 9 19:27:11.854440 env[1135]: time="2024-02-09T19:27:11.854379010Z" level=error msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" failed" error="failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:11.854625 kubelet[2109]: E0209 19:27:11.854604 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:27:11.855184 kubelet[2109]: E0209 19:27:11.854645 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26} Feb 9 19:27:11.855184 kubelet[2109]: E0209 19:27:11.854693 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:11.855184 kubelet[2109]: E0209 19:27:11.854745 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" podUID=d9a58a43-cc9b-49e7-89c7-8d2f444dd31a Feb 9 19:27:11.860548 env[1135]: time="2024-02-09T19:27:11.860472849Z" level=error msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" failed" error="failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 9 19:27:11.860968 kubelet[2109]: E0209 19:27:11.860788 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:27:11.860968 kubelet[2109]: E0209 19:27:11.860828 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589} Feb 9 19:27:11.860968 kubelet[2109]: E0209 19:27:11.860888 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:11.860968 kubelet[2109]: E0209 19:27:11.860951 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-f8g65" podUID=856ba022-e379-4cd0-87a4-cdfa313ac255 Feb 9 
19:27:12.026669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281-shm.mount: Deactivated successfully. Feb 9 19:27:12.564833 env[1135]: time="2024-02-09T19:27:12.564739610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x2vsr,Uid:258b5f6f-f507-494e-8282-83a91907d3f5,Namespace:calico-system,Attempt:0,}" Feb 9 19:27:12.706442 env[1135]: time="2024-02-09T19:27:12.706304565Z" level=error msg="Failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:12.710234 env[1135]: time="2024-02-09T19:27:12.710179083Z" level=error msg="encountered an error cleaning up failed sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:12.710381 env[1135]: time="2024-02-09T19:27:12.710346678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x2vsr,Uid:258b5f6f-f507-494e-8282-83a91907d3f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:12.711562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e-shm.mount: Deactivated successfully. 
Feb 9 19:27:12.712478 kubelet[2109]: E0209 19:27:12.712071 2109 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:12.712478 kubelet[2109]: E0209 19:27:12.712138 2109 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x2vsr" Feb 9 19:27:12.712478 kubelet[2109]: E0209 19:27:12.712165 2109 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x2vsr" Feb 9 19:27:12.714418 kubelet[2109]: E0209 19:27:12.712235 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x2vsr_calico-system(258b5f6f-f507-494e-8282-83a91907d3f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x2vsr_calico-system(258b5f6f-f507-494e-8282-83a91907d3f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:12.774339 kubelet[2109]: I0209 19:27:12.770740 2109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:27:12.775213 env[1135]: time="2024-02-09T19:27:12.772804481Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:27:12.840682 env[1135]: time="2024-02-09T19:27:12.839837952Z" level=error msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" failed" error="failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:12.840958 kubelet[2109]: E0209 19:27:12.840485 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:27:12.840958 kubelet[2109]: E0209 19:27:12.840550 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e} Feb 9 19:27:12.840958 kubelet[2109]: E0209 19:27:12.840590 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:12.840958 kubelet[2109]: E0209 19:27:12.840634 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:23.537381 kubelet[2109]: I0209 19:27:23.536841 2109 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:27:23.561075 env[1135]: time="2024-02-09T19:27:23.561001275Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:27:23.567106 env[1135]: time="2024-02-09T19:27:23.566996034Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:27:23.571499 env[1135]: time="2024-02-09T19:27:23.571153426Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:27:23.683084 env[1135]: time="2024-02-09T19:27:23.683022740Z" level=error msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" failed" error="failed to destroy network for sandbox 
\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:23.683686 kubelet[2109]: E0209 19:27:23.683497 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:27:23.683686 kubelet[2109]: E0209 19:27:23.683541 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589} Feb 9 19:27:23.683686 kubelet[2109]: E0209 19:27:23.683589 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:23.683686 kubelet[2109]: E0209 19:27:23.683639 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-f8g65" podUID=856ba022-e379-4cd0-87a4-cdfa313ac255 Feb 9 19:27:23.687238 env[1135]: time="2024-02-09T19:27:23.687196514Z" level=error msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" failed" error="failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:23.687755 kubelet[2109]: E0209 19:27:23.687475 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:27:23.687755 kubelet[2109]: E0209 19:27:23.687506 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281} Feb 9 19:27:23.687755 kubelet[2109]: E0209 19:27:23.687550 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Feb 9 19:27:23.687755 kubelet[2109]: E0209 19:27:23.687586 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gcw56" podUID=93fdcc9f-1773-437d-8e12-9052cf2f26e5 Feb 9 19:27:23.697724 env[1135]: time="2024-02-09T19:27:23.697677213Z" level=error msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" failed" error="failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:23.701408 kubelet[2109]: E0209 19:27:23.698032 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:27:23.701408 kubelet[2109]: E0209 19:27:23.698066 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26} Feb 9 19:27:23.701408 kubelet[2109]: E0209 19:27:23.698109 2109 kuberuntime_manager.go:705] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:23.701408 kubelet[2109]: E0209 19:27:23.698146 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" podUID=d9a58a43-cc9b-49e7-89c7-8d2f444dd31a Feb 9 19:27:23.796304 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 19:27:23.796497 kernel: audit: type=1325 audit(1707506843.791:294): table=filter:117 family=2 entries=13 op=nft_register_rule pid=3712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:23.791000 audit[3712]: NETFILTER_CFG table=filter:117 family=2 entries=13 op=nft_register_rule pid=3712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:23.804709 kernel: audit: type=1300 audit(1707506843.791:294): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffed83c3790 a2=0 a3=7ffed83c377c items=0 ppid=2269 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:23.791000 audit[3712]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffed83c3790 a2=0 a3=7ffed83c377c items=0 ppid=2269 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:23.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:23.821035 kernel: audit: type=1327 audit(1707506843.791:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:23.822000 audit[3712]: NETFILTER_CFG table=nat:118 family=2 entries=27 op=nft_register_chain pid=3712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:23.822000 audit[3712]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffed83c3790 a2=0 a3=7ffed83c377c items=0 ppid=2269 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:23.844074 kernel: audit: type=1325 audit(1707506843.822:295): table=nat:118 family=2 entries=27 op=nft_register_chain pid=3712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:23.844193 kernel: audit: type=1300 audit(1707506843.822:295): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffed83c3790 a2=0 a3=7ffed83c377c items=0 ppid=2269 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:23.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:23.850687 kernel: audit: type=1327 
audit(1707506843.822:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:24.487345 env[1135]: time="2024-02-09T19:27:24.487279589Z" level=info msg="StopPodSandbox for \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\"" Feb 9 19:27:24.487534 env[1135]: time="2024-02-09T19:27:24.487438748Z" level=info msg="TearDown network for sandbox \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\" successfully" Feb 9 19:27:24.487534 env[1135]: time="2024-02-09T19:27:24.487516385Z" level=info msg="StopPodSandbox for \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\" returns successfully" Feb 9 19:27:24.488674 env[1135]: time="2024-02-09T19:27:24.488617165Z" level=info msg="RemovePodSandbox for \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\"" Feb 9 19:27:24.488821 env[1135]: time="2024-02-09T19:27:24.488689120Z" level=info msg="Forcibly stopping sandbox \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\"" Feb 9 19:27:24.488941 env[1135]: time="2024-02-09T19:27:24.488823253Z" level=info msg="TearDown network for sandbox \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\" successfully" Feb 9 19:27:24.551998 env[1135]: time="2024-02-09T19:27:24.551792521Z" level=info msg="RemovePodSandbox \"59b0e7fc26a6d63207cfaa6fd7fb742017dcf5d4ab3b26788af3b3198181047e\" returns successfully" Feb 9 19:27:24.553044 env[1135]: time="2024-02-09T19:27:24.552766954Z" level=info msg="StopPodSandbox for \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\"" Feb 9 19:27:24.553184 env[1135]: time="2024-02-09T19:27:24.553040448Z" level=info msg="TearDown network for sandbox \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\" successfully" Feb 9 19:27:24.553184 env[1135]: time="2024-02-09T19:27:24.553126140Z" level=info msg="StopPodSandbox for 
\"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\" returns successfully" Feb 9 19:27:24.553673 env[1135]: time="2024-02-09T19:27:24.553591947Z" level=info msg="RemovePodSandbox for \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\"" Feb 9 19:27:24.553881 env[1135]: time="2024-02-09T19:27:24.553668630Z" level=info msg="Forcibly stopping sandbox \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\"" Feb 9 19:27:24.553881 env[1135]: time="2024-02-09T19:27:24.553801701Z" level=info msg="TearDown network for sandbox \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\" successfully" Feb 9 19:27:24.565529 env[1135]: time="2024-02-09T19:27:24.565379213Z" level=info msg="RemovePodSandbox \"3d3e500e39e6b69b7a890b6f492abfe024e778a295ccf9e2036aa0c27c1097a9\" returns successfully" Feb 9 19:27:25.559077 env[1135]: time="2024-02-09T19:27:25.558987838Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:27:25.637618 env[1135]: time="2024-02-09T19:27:25.637510796Z" level=error msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" failed" error="failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:25.638410 kubelet[2109]: E0209 19:27:25.638053 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:27:25.638410 kubelet[2109]: E0209 19:27:25.638243 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e} Feb 9 19:27:25.639144 kubelet[2109]: E0209 19:27:25.638442 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:25.639144 kubelet[2109]: E0209 19:27:25.638565 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:34.566338 env[1135]: time="2024-02-09T19:27:34.560974719Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:27:34.634509 env[1135]: time="2024-02-09T19:27:34.634440010Z" level=error msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" failed" error="failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:34.635138 kubelet[2109]: E0209 19:27:34.634884 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:27:34.635138 kubelet[2109]: E0209 19:27:34.634988 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589} Feb 9 19:27:34.635138 kubelet[2109]: E0209 19:27:34.635081 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:34.636304 kubelet[2109]: E0209 19:27:34.635495 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-f8g65" podUID=856ba022-e379-4cd0-87a4-cdfa313ac255 Feb 9 19:27:36.565976 env[1135]: time="2024-02-09T19:27:36.565841293Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:27:36.645630 env[1135]: time="2024-02-09T19:27:36.645494099Z" level=error msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" failed" error="failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:36.646544 kubelet[2109]: E0209 19:27:36.646215 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:27:36.646544 kubelet[2109]: E0209 19:27:36.646317 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26} Feb 9 19:27:36.646544 kubelet[2109]: E0209 19:27:36.646413 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:36.646544 kubelet[2109]: E0209 19:27:36.646491 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" podUID=d9a58a43-cc9b-49e7-89c7-8d2f444dd31a Feb 9 19:27:37.561059 env[1135]: time="2024-02-09T19:27:37.560979887Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:27:37.632700 env[1135]: time="2024-02-09T19:27:37.632584407Z" level=error msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" failed" error="failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:37.634424 kubelet[2109]: E0209 19:27:37.634337 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:27:37.634604 kubelet[2109]: E0209 19:27:37.634433 2109 
kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281} Feb 9 19:27:37.634604 kubelet[2109]: E0209 19:27:37.634529 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:37.635020 kubelet[2109]: E0209 19:27:37.634609 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gcw56" podUID=93fdcc9f-1773-437d-8e12-9052cf2f26e5 Feb 9 19:27:38.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.217:22-172.24.4.1:50354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.888610 kernel: audit: type=1130 audit(1707506858.882:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.217:22-172.24.4.1:50354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:38.882955 systemd[1]: Started sshd@7-172.24.4.217:22-172.24.4.1:50354.service. 
Feb 9 19:27:39.559957 env[1135]: time="2024-02-09T19:27:39.559923734Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:27:39.620786 env[1135]: time="2024-02-09T19:27:39.620680010Z" level=error msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" failed" error="failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:39.622718 kubelet[2109]: E0209 19:27:39.622399 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:27:39.622718 kubelet[2109]: E0209 19:27:39.622494 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e} Feb 9 19:27:39.622718 kubelet[2109]: E0209 19:27:39.622596 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:39.622718 
kubelet[2109]: E0209 19:27:39.622687 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:40.425000 audit[3792]: USER_ACCT pid=3792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:40.448883 kernel: audit: type=1101 audit(1707506860.425:297): pid=3792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:40.449028 kernel: audit: type=1103 audit(1707506860.437:298): pid=3792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:40.449085 kernel: audit: type=1006 audit(1707506860.437:299): pid=3792 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Feb 9 19:27:40.437000 audit[3792]: CRED_ACQ pid=3792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 
terminal=ssh res=success' Feb 9 19:27:40.449300 sshd[3792]: Accepted publickey for core from 172.24.4.1 port 50354 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:40.439332 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:40.437000 audit[3792]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe77f5c0a0 a2=3 a3=0 items=0 ppid=1 pid=3792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:40.466680 kernel: audit: type=1300 audit(1707506860.437:299): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe77f5c0a0 a2=3 a3=0 items=0 ppid=1 pid=3792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:40.437000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:40.471954 kernel: audit: type=1327 audit(1707506860.437:299): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:40.478289 systemd-logind[1122]: New session 8 of user core. Feb 9 19:27:40.480071 systemd[1]: Started session-8.scope. 
Feb 9 19:27:40.496000 audit[3792]: USER_START pid=3792 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:40.509956 kernel: audit: type=1105 audit(1707506860.496:300): pid=3792 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:40.499000 audit[3816]: CRED_ACQ pid=3816 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:40.520086 kernel: audit: type=1103 audit(1707506860.499:301): pid=3816 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:41.347747 sshd[3792]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:41.349000 audit[3792]: USER_END pid=3792 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:41.364087 kernel: audit: type=1106 audit(1707506861.349:302): pid=3792 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 
addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:41.377424 kernel: audit: type=1104 audit(1707506861.349:303): pid=3792 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:41.349000 audit[3792]: CRED_DISP pid=3792 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:41.364989 systemd[1]: sshd@7-172.24.4.217:22-172.24.4.1:50354.service: Deactivated successfully. Feb 9 19:27:41.375652 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:27:41.375816 systemd-logind[1122]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:27:41.378664 systemd-logind[1122]: Removed session 8. Feb 9 19:27:41.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.24.4.217:22-172.24.4.1:50354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:46.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.217:22-172.24.4.1:48824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:46.353086 systemd[1]: Started sshd@8-172.24.4.217:22-172.24.4.1:48824.service. Feb 9 19:27:46.359126 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:27:46.359198 kernel: audit: type=1130 audit(1707506866.352:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.217:22-172.24.4.1:48824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:47.558440 env[1135]: time="2024-02-09T19:27:47.558381263Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:27:47.603976 env[1135]: time="2024-02-09T19:27:47.603885622Z" level=error msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" failed" error="failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:47.604197 kubelet[2109]: E0209 19:27:47.604144 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:27:47.604197 kubelet[2109]: E0209 19:27:47.604189 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589} Feb 9 19:27:47.606520 kubelet[2109]: E0209 19:27:47.604235 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 
19:27:47.606520 kubelet[2109]: E0209 19:27:47.604275 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-f8g65" podUID=856ba022-e379-4cd0-87a4-cdfa313ac255 Feb 9 19:27:47.703000 audit[3826]: USER_ACCT pid=3826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:47.706180 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:47.706674 sshd[3826]: Accepted publickey for core from 172.24.4.1 port 48824 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:47.704000 audit[3826]: CRED_ACQ pid=3826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:47.714011 kernel: audit: type=1101 audit(1707506867.703:306): pid=3826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:47.714091 kernel: audit: type=1103 audit(1707506867.704:307): pid=3826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:47.715194 kernel: audit: type=1006 audit(1707506867.704:308): pid=3826 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 9 19:27:47.719462 kernel: audit: type=1300 audit(1707506867.704:308): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbe705910 a2=3 a3=0 items=0 ppid=1 pid=3826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:47.704000 audit[3826]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbe705910 a2=3 a3=0 items=0 ppid=1 pid=3826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:47.720755 systemd[1]: Started session-9.scope. Feb 9 19:27:47.723097 systemd-logind[1122]: New session 9 of user core. 
Feb 9 19:27:47.704000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:47.726054 kernel: audit: type=1327 audit(1707506867.704:308): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:47.733000 audit[3826]: USER_START pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:47.742097 kernel: audit: type=1105 audit(1707506867.733:309): pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:47.740000 audit[3848]: CRED_ACQ pid=3848 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:47.748925 kernel: audit: type=1103 audit(1707506867.740:310): pid=3848 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:48.527506 sshd[3826]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:48.529000 audit[3826]: USER_END pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:48.534111 systemd[1]: sshd@8-172.24.4.217:22-172.24.4.1:48824.service: Deactivated successfully. 
Feb 9 19:27:48.536046 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:27:48.545073 kernel: audit: type=1106 audit(1707506868.529:311): pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:48.545626 systemd-logind[1122]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:27:48.529000 audit[3826]: CRED_DISP pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:48.559319 kernel: audit: type=1104 audit(1707506868.529:312): pid=3826 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:48.564032 systemd-logind[1122]: Removed session 9. Feb 9 19:27:48.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.24.4.217:22-172.24.4.1:48824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:49.557820 env[1135]: time="2024-02-09T19:27:49.557778191Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:27:49.595715 env[1135]: time="2024-02-09T19:27:49.595658730Z" level=error msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" failed" error="failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:49.596132 kubelet[2109]: E0209 19:27:49.596101 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:27:49.596492 kubelet[2109]: E0209 19:27:49.596179 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26} Feb 9 19:27:49.596492 kubelet[2109]: E0209 19:27:49.596225 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 
19:27:49.596492 kubelet[2109]: E0209 19:27:49.596280 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" podUID=d9a58a43-cc9b-49e7-89c7-8d2f444dd31a Feb 9 19:27:51.558692 env[1135]: time="2024-02-09T19:27:51.558657533Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:27:51.608414 env[1135]: time="2024-02-09T19:27:51.608349415Z" level=error msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" failed" error="failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:51.609638 kubelet[2109]: E0209 19:27:51.609619 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:27:51.610239 kubelet[2109]: E0209 19:27:51.610199 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd 
ID:c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281} Feb 9 19:27:51.610392 kubelet[2109]: E0209 19:27:51.610365 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:27:51.610476 kubelet[2109]: E0209 19:27:51.610465 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gcw56" podUID=93fdcc9f-1773-437d-8e12-9052cf2f26e5 Feb 9 19:27:53.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.217:22-172.24.4.1:48840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:53.534666 systemd[1]: Started sshd@9-172.24.4.217:22-172.24.4.1:48840.service. Feb 9 19:27:53.539284 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:27:53.539481 kernel: audit: type=1130 audit(1707506873.534:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.217:22-172.24.4.1:48840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:53.565590 env[1135]: time="2024-02-09T19:27:53.565498649Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:27:53.640978 env[1135]: time="2024-02-09T19:27:53.640763506Z" level=error msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" failed" error="failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:27:53.641246 kubelet[2109]: E0209 19:27:53.641185 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:27:53.641985 kubelet[2109]: E0209 19:27:53.641269 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e} Feb 9 19:27:53.641985 kubelet[2109]: E0209 19:27:53.641360 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 
19:27:53.641985 kubelet[2109]: E0209 19:27:53.641445 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:27:54.862000 audit[3895]: USER_ACCT pid=3895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:54.864135 sshd[3895]: Accepted publickey for core from 172.24.4.1 port 48840 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:27:54.876959 kernel: audit: type=1101 audit(1707506874.862:315): pid=3895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:54.876000 audit[3895]: CRED_ACQ pid=3895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:54.884690 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:54.903475 kernel: audit: type=1103 audit(1707506874.876:316): pid=3895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:54.903977 kernel: audit: type=1006 audit(1707506874.876:317): pid=3895 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 19:27:54.904358 kernel: audit: type=1300 audit(1707506874.876:317): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdda25ba10 a2=3 a3=0 items=0 ppid=1 pid=3895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:54.876000 audit[3895]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdda25ba10 a2=3 a3=0 items=0 ppid=1 pid=3895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:54.902613 systemd[1]: Started session-10.scope. Feb 9 19:27:54.904878 systemd-logind[1122]: New session 10 of user core. 
Feb 9 19:27:54.876000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:54.926392 kernel: audit: type=1327 audit(1707506874.876:317): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:54.924000 audit[3895]: USER_START pid=3895 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:54.932000 audit[3919]: CRED_ACQ pid=3919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:54.938826 kernel: audit: type=1105 audit(1707506874.924:318): pid=3895 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:54.938933 kernel: audit: type=1103 audit(1707506874.932:319): pid=3919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:55.640577 sshd[3895]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:55.642000 audit[3895]: USER_END pid=3895 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:55.660655 kernel: audit: type=1106 audit(1707506875.642:320): pid=3895 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:55.660892 kernel: audit: type=1104 audit(1707506875.642:321): pid=3895 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:55.642000 audit[3895]: CRED_DISP pid=3895 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:27:55.662955 systemd-logind[1122]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:27:55.667060 systemd[1]: sshd@9-172.24.4.217:22-172.24.4.1:48840.service: Deactivated successfully. Feb 9 19:27:55.668738 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:27:55.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.24.4.217:22-172.24.4.1:48840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:55.676454 systemd-logind[1122]: Removed session 10. 
Feb 9 19:28:00.563462 env[1135]: time="2024-02-09T19:28:00.563364100Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:28:00.638541 env[1135]: time="2024-02-09T19:28:00.638444091Z" level=error msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" failed" error="failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:00.639073 kubelet[2109]: E0209 19:28:00.638924 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:28:00.639073 kubelet[2109]: E0209 19:28:00.638967 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589} Feb 9 19:28:00.639073 kubelet[2109]: E0209 19:28:00.639017 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:00.639073 
kubelet[2109]: E0209 19:28:00.639054 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-f8g65" podUID=856ba022-e379-4cd0-87a4-cdfa313ac255 Feb 9 19:28:00.648826 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:28:00.648970 kernel: audit: type=1130 audit(1707506880.642:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.217:22-172.24.4.1:59932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:00.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.217:22-172.24.4.1:59932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:00.642770 systemd[1]: Started sshd@10-172.24.4.217:22-172.24.4.1:59932.service. 
Feb 9 19:28:01.991982 kernel: audit: type=1101 audit(1707506881.980:324): pid=3949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:01.980000 audit[3949]: USER_ACCT pid=3949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:01.992388 sshd[3949]: Accepted publickey for core from 172.24.4.1 port 59932 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:01.991000 audit[3949]: CRED_ACQ pid=3949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.001296 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:02.008276 kernel: audit: type=1103 audit(1707506881.991:325): pid=3949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.022444 kernel: audit: type=1006 audit(1707506881.991:326): pid=3949 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 19:28:02.020106 systemd[1]: Started session-11.scope. Feb 9 19:28:02.021607 systemd-logind[1122]: New session 11 of user core. 
Feb 9 19:28:01.991000 audit[3949]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe32c47d0 a2=3 a3=0 items=0 ppid=1 pid=3949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:02.036033 kernel: audit: type=1300 audit(1707506881.991:326): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe32c47d0 a2=3 a3=0 items=0 ppid=1 pid=3949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:02.036139 kernel: audit: type=1327 audit(1707506881.991:326): proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:01.991000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:02.046000 audit[3949]: USER_START pid=3949 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.051000 audit[3952]: CRED_ACQ pid=3952 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.056946 kernel: audit: type=1105 audit(1707506882.046:327): pid=3949 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.057078 kernel: audit: type=1103 audit(1707506882.051:328): pid=3952 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.839318 sshd[3949]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:02.840000 audit[3949]: USER_END pid=3949 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.860649 kernel: audit: type=1106 audit(1707506882.840:329): pid=3949 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.855672 systemd[1]: sshd@10-172.24.4.217:22-172.24.4.1:59932.service: Deactivated successfully. Feb 9 19:28:02.859543 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:28:02.840000 audit[3949]: CRED_DISP pid=3949 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.869308 systemd-logind[1122]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:28:02.871691 systemd-logind[1122]: Removed session 11. 
Feb 9 19:28:02.873518 kernel: audit: type=1104 audit(1707506882.840:330): pid=3949 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:02.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.24.4.217:22-172.24.4.1:59932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:04.563819 env[1135]: time="2024-02-09T19:28:04.563697023Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:28:04.585026 env[1135]: time="2024-02-09T19:28:04.583643823Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:28:04.651304 env[1135]: time="2024-02-09T19:28:04.651242912Z" level=error msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" failed" error="failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:04.651471 kubelet[2109]: E0209 19:28:04.651439 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:28:04.651750 kubelet[2109]: E0209 19:28:04.651491 2109 
kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26} Feb 9 19:28:04.651750 kubelet[2109]: E0209 19:28:04.651533 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:04.651750 kubelet[2109]: E0209 19:28:04.651570 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" podUID=d9a58a43-cc9b-49e7-89c7-8d2f444dd31a Feb 9 19:28:04.669106 env[1135]: time="2024-02-09T19:28:04.669051456Z" level=error msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" failed" error="failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:04.669557 kubelet[2109]: E0209 19:28:04.669527 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:28:04.669632 kubelet[2109]: E0209 19:28:04.669584 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e} Feb 9 19:28:04.669632 kubelet[2109]: E0209 19:28:04.669627 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:04.669774 kubelet[2109]: E0209 19:28:04.669680 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:28:05.559320 env[1135]: time="2024-02-09T19:28:05.559249289Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 
19:28:05.635820 env[1135]: time="2024-02-09T19:28:05.635709145Z" level=error msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" failed" error="failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:05.636746 kubelet[2109]: E0209 19:28:05.636129 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:28:05.636746 kubelet[2109]: E0209 19:28:05.636201 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281} Feb 9 19:28:05.636746 kubelet[2109]: E0209 19:28:05.636297 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:05.636746 kubelet[2109]: E0209 19:28:05.636382 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gcw56" podUID=93fdcc9f-1773-437d-8e12-9052cf2f26e5 Feb 9 19:28:07.853975 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:28:07.854265 kernel: audit: type=1130 audit(1707506887.845:332): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.217:22-172.24.4.1:38844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:07.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.217:22-172.24.4.1:38844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:07.846019 systemd[1]: Started sshd@11-172.24.4.217:22-172.24.4.1:38844.service. 
Feb 9 19:28:09.195000 audit[4021]: USER_ACCT pid=4021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:09.212031 sshd[4021]: Accepted publickey for core from 172.24.4.1 port 38844 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:09.212968 kernel: audit: type=1101 audit(1707506889.195:333): pid=4021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:09.211000 audit[4021]: CRED_ACQ pid=4021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:09.223045 kernel: audit: type=1103 audit(1707506889.211:334): pid=4021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:09.223470 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:09.211000 audit[4021]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc3dd1eb0 a2=3 a3=0 items=0 ppid=1 pid=4021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:09.240969 kernel: audit: type=1006 audit(1707506889.211:335): pid=4021 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 9 19:28:09.241118 kernel: 
audit: type=1300 audit(1707506889.211:335): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc3dd1eb0 a2=3 a3=0 items=0 ppid=1 pid=4021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:09.211000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:09.245309 kernel: audit: type=1327 audit(1707506889.211:335): proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:09.252985 systemd-logind[1122]: New session 12 of user core. Feb 9 19:28:09.254181 systemd[1]: Started session-12.scope. Feb 9 19:28:09.263000 audit[4021]: USER_START pid=4021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:09.270975 kernel: audit: type=1105 audit(1707506889.263:336): pid=4021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:09.270000 audit[4024]: CRED_ACQ pid=4024 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:09.276994 kernel: audit: type=1103 audit(1707506889.270:337): pid=4024 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:10.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.217:22-172.24.4.1:38854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:10.290554 systemd[1]: Started sshd@12-172.24.4.217:22-172.24.4.1:38854.service. Feb 9 19:28:10.300074 kernel: audit: type=1130 audit(1707506890.289:338): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.217:22-172.24.4.1:38854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:10.300491 sshd[4021]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:10.302000 audit[4021]: USER_END pid=4021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:10.315030 kernel: audit: type=1106 audit(1707506890.302:339): pid=4021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:10.302000 audit[4021]: CRED_DISP pid=4021 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:10.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.24.4.217:22-172.24.4.1:38844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:10.352737 systemd[1]: sshd@11-172.24.4.217:22-172.24.4.1:38844.service: Deactivated successfully. 
Feb 9 19:28:10.354557 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:28:10.358518 systemd-logind[1122]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:28:10.361233 systemd-logind[1122]: Removed session 12. Feb 9 19:28:11.564752 env[1135]: time="2024-02-09T19:28:11.561813688Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:28:11.579000 audit[4035]: USER_ACCT pid=4035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:11.582131 sshd[4035]: Accepted publickey for core from 172.24.4.1 port 38854 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:11.582000 audit[4035]: CRED_ACQ pid=4035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:11.582000 audit[4035]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6cd0ae70 a2=3 a3=0 items=0 ppid=1 pid=4035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:11.582000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:11.584341 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:11.597954 systemd[1]: Started session-13.scope. Feb 9 19:28:11.600150 systemd-logind[1122]: New session 13 of user core. 
Feb 9 19:28:11.618000 audit[4035]: USER_START pid=4035 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:11.621000 audit[4057]: CRED_ACQ pid=4057 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:11.666107 env[1135]: time="2024-02-09T19:28:11.666018495Z" level=error msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" failed" error="failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:11.666600 kubelet[2109]: E0209 19:28:11.666532 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:28:11.666968 kubelet[2109]: E0209 19:28:11.666654 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589} Feb 9 19:28:11.666968 kubelet[2109]: E0209 19:28:11.666821 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:11.667163 kubelet[2109]: E0209 19:28:11.667106 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"856ba022-e379-4cd0-87a4-cdfa313ac255\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-f8g65" podUID=856ba022-e379-4cd0-87a4-cdfa313ac255 Feb 9 19:28:13.844608 systemd[1]: Started sshd@13-172.24.4.217:22-172.24.4.1:38856.service. Feb 9 19:28:13.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.217:22-172.24.4.1:38856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:13.853307 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 19:28:13.853453 kernel: audit: type=1130 audit(1707506893.844:347): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.217:22-172.24.4.1:38856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:13.898724 sshd[4035]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:13.899000 audit[4035]: USER_END pid=4035 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:13.914788 kernel: audit: type=1106 audit(1707506893.899:348): pid=4035 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:13.914960 systemd[1]: sshd@12-172.24.4.217:22-172.24.4.1:38854.service: Deactivated successfully. Feb 9 19:28:13.917991 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:28:13.900000 audit[4035]: CRED_DISP pid=4035 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:13.931009 kernel: audit: type=1104 audit(1707506893.900:349): pid=4035 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:13.930888 systemd-logind[1122]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:28:13.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.217:22-172.24.4.1:38854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:13.943078 kernel: audit: type=1131 audit(1707506893.914:350): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.24.4.217:22-172.24.4.1:38854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:13.943675 systemd-logind[1122]: Removed session 13. Feb 9 19:28:15.348000 audit[4067]: USER_ACCT pid=4067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:15.350219 sshd[4067]: Accepted publickey for core from 172.24.4.1 port 38856 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:15.365735 kernel: audit: type=1101 audit(1707506895.348:351): pid=4067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:15.365174 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:15.363000 audit[4067]: CRED_ACQ pid=4067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:15.388065 kernel: audit: type=1103 audit(1707506895.363:352): pid=4067 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:15.388203 kernel: audit: type=1006 audit(1707506895.363:353): pid=4067 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=14 res=1 Feb 9 19:28:15.363000 audit[4067]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff340831f0 a2=3 a3=0 items=0 ppid=1 pid=4067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:15.399844 kernel: audit: type=1300 audit(1707506895.363:353): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff340831f0 a2=3 a3=0 items=0 ppid=1 pid=4067 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:15.400463 kernel: audit: type=1327 audit(1707506895.363:353): proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:15.363000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:15.413420 systemd[1]: Started session-14.scope. Feb 9 19:28:15.413549 systemd-logind[1122]: New session 14 of user core. 
Feb 9 19:28:15.436312 kernel: audit: type=1105 audit(1707506895.429:354): pid=4067 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:15.429000 audit[4067]: USER_START pid=4067 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:15.432000 audit[4072]: CRED_ACQ pid=4072 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:16.530677 sshd[4067]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:16.539000 audit[4067]: USER_END pid=4067 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:16.539000 audit[4067]: CRED_DISP pid=4067 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:16.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.24.4.217:22-172.24.4.1:38856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:16.547036 systemd[1]: sshd@13-172.24.4.217:22-172.24.4.1:38856.service: Deactivated successfully. Feb 9 19:28:16.569722 env[1135]: time="2024-02-09T19:28:16.564174473Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:28:16.554196 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:28:16.555221 systemd-logind[1122]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:28:16.568799 systemd-logind[1122]: Removed session 14. Feb 9 19:28:16.630163 env[1135]: time="2024-02-09T19:28:16.630098608Z" level=error msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" failed" error="failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:16.630629 kubelet[2109]: E0209 19:28:16.630447 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:28:16.630629 kubelet[2109]: E0209 19:28:16.630509 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26} Feb 9 19:28:16.630629 kubelet[2109]: E0209 19:28:16.630553 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:16.630629 kubelet[2109]: E0209 19:28:16.630610 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" podUID=d9a58a43-cc9b-49e7-89c7-8d2f444dd31a Feb 9 19:28:19.559390 env[1135]: time="2024-02-09T19:28:19.558531726Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:28:19.666575 env[1135]: time="2024-02-09T19:28:19.666499942Z" level=error msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" failed" error="failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:19.667025 kubelet[2109]: E0209 19:28:19.666859 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:28:19.667025 kubelet[2109]: E0209 19:28:19.666919 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e} Feb 9 19:28:19.667025 kubelet[2109]: E0209 19:28:19.666966 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:19.667025 kubelet[2109]: E0209 19:28:19.667005 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"258b5f6f-f507-494e-8282-83a91907d3f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x2vsr" podUID=258b5f6f-f507-494e-8282-83a91907d3f5 Feb 9 19:28:20.559165 env[1135]: time="2024-02-09T19:28:20.559113033Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:28:20.690277 env[1135]: time="2024-02-09T19:28:20.690223896Z" level=error msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" failed" error="failed to destroy network for 
sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:28:20.712017 kubelet[2109]: E0209 19:28:20.694095 2109 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:28:20.712017 kubelet[2109]: E0209 19:28:20.694154 2109 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281} Feb 9 19:28:20.712017 kubelet[2109]: E0209 19:28:20.694201 2109 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:28:20.712017 kubelet[2109]: E0209 19:28:20.694256 2109 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93fdcc9f-1773-437d-8e12-9052cf2f26e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-gcw56" podUID=93fdcc9f-1773-437d-8e12-9052cf2f26e5 Feb 9 19:28:21.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.217:22-172.24.4.1:53632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:21.536712 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 19:28:21.536802 kernel: audit: type=1130 audit(1707506901.523:359): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.217:22-172.24.4.1:53632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:21.524503 systemd[1]: Started sshd@14-172.24.4.217:22-172.24.4.1:53632.service. Feb 9 19:28:22.502949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665773640.mount: Deactivated successfully. 
Feb 9 19:28:22.662418 env[1135]: time="2024-02-09T19:28:22.662260949Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:22.666070 env[1135]: time="2024-02-09T19:28:22.665996762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:22.671235 env[1135]: time="2024-02-09T19:28:22.671200742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:22.673577 env[1135]: time="2024-02-09T19:28:22.673550735Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:22.675124 env[1135]: time="2024-02-09T19:28:22.675006477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:28:22.703102 env[1135]: time="2024-02-09T19:28:22.703037753Z" level=info msg="CreateContainer within sandbox \"d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:28:22.736481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243128197.mount: Deactivated successfully. 
Feb 9 19:28:22.751290 env[1135]: time="2024-02-09T19:28:22.751203900Z" level=info msg="CreateContainer within sandbox \"d2844f52eaae4956fdef06e8b743cab33bed8045e416744facfcf6ece1333c5e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7dab9bb23639f8dd1be0275750646ed8b5e56b941d0d53153d98176fac0b94f4\"" Feb 9 19:28:22.754934 env[1135]: time="2024-02-09T19:28:22.754789011Z" level=info msg="StartContainer for \"7dab9bb23639f8dd1be0275750646ed8b5e56b941d0d53153d98176fac0b94f4\"" Feb 9 19:28:22.873884 env[1135]: time="2024-02-09T19:28:22.873837632Z" level=info msg="StartContainer for \"7dab9bb23639f8dd1be0275750646ed8b5e56b941d0d53153d98176fac0b94f4\" returns successfully" Feb 9 19:28:22.919409 sshd[4146]: Accepted publickey for core from 172.24.4.1 port 53632 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:22.918000 audit[4146]: USER_ACCT pid=4146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:22.925475 kernel: audit: type=1101 audit(1707506902.918:360): pid=4146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:22.926636 kernel: audit: type=1103 audit(1707506902.923:361): pid=4146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:22.923000 audit[4146]: CRED_ACQ pid=4146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:22.926014 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:22.934985 kernel: audit: type=1006 audit(1707506902.923:362): pid=4146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Feb 9 19:28:22.923000 audit[4146]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd277ee680 a2=3 a3=0 items=0 ppid=1 pid=4146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:22.923000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:22.944664 kernel: audit: type=1300 audit(1707506902.923:362): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd277ee680 a2=3 a3=0 items=0 ppid=1 pid=4146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:22.944747 kernel: audit: type=1327 audit(1707506902.923:362): proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:22.945602 systemd-logind[1122]: New session 15 of user core. Feb 9 19:28:22.946500 systemd[1]: Started session-15.scope. 
Feb 9 19:28:22.960000 audit[4146]: USER_START pid=4146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:22.972578 kernel: audit: type=1105 audit(1707506902.960:363): pid=4146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:22.970000 audit[4190]: CRED_ACQ pid=4190 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:22.979918 kernel: audit: type=1103 audit(1707506902.970:364): pid=4190 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:23.044951 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:28:23.045136 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Feb 9 19:28:23.558496 env[1135]: time="2024-02-09T19:28:23.558295156Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:28:23.950936 kubelet[2109]: I0209 19:28:23.950637 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-c7vdt" podStartSLOduration=-9.22337195091956e+09 pod.CreationTimestamp="2024-02-09 19:26:58 +0000 UTC" firstStartedPulling="2024-02-09 19:26:59.699513601 +0000 UTC m=+35.390923743" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:23.087993245 +0000 UTC m=+118.779403407" watchObservedRunningTime="2024-02-09 19:28:23.935217029 +0000 UTC m=+119.626627191" Feb 9 19:28:24.792592 sshd[4146]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:24.807597 kernel: audit: type=1106 audit(1707506904.793:365): pid=4146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:24.807672 kernel: audit: type=1104 audit(1707506904.793:366): pid=4146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:24.793000 audit[4146]: USER_END pid=4146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:24.793000 audit[4146]: CRED_DISP pid=4146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:24.800875 systemd[1]: sshd@14-172.24.4.217:22-172.24.4.1:53632.service: Deactivated successfully. Feb 9 19:28:24.802169 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:28:24.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.24.4.217:22-172.24.4.1:53632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:24.802554 systemd-logind[1122]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:28:24.809204 systemd-logind[1122]: Removed session 15. Feb 9 19:28:25.101000 audit[4359]: AVC avc: denied { write } for pid=4359 comm="tee" name="fd" dev="proc" ino=30834 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:25.101000 audit[4359]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc59b0c95e a2=241 a3=1b6 items=1 ppid=4327 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.101000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:28:25.101000 audit: PATH item=0 name="/dev/fd/63" inode=30822 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:25.101000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:25.108000 audit[4374]: AVC avc: denied { write } for pid=4374 comm="tee" name="fd" dev="proc" ino=30267 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:25.108000 audit[4374]: SYSCALL 
arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffbcf8894d a2=241 a3=1b6 items=1 ppid=4330 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.108000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:28:25.108000 audit: PATH item=0 name="/dev/fd/63" inode=30264 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:25.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:25.102000 audit[4361]: AVC avc: denied { write } for pid=4361 comm="tee" name="fd" dev="proc" ino=30838 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:25.102000 audit[4361]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff3c72b95d a2=241 a3=1b6 items=1 ppid=4328 pid=4361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.102000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:28:25.102000 audit: PATH item=0 name="/dev/fd/63" inode=30823 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:25.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:25.124000 audit[4369]: AVC avc: denied { write } for pid=4369 comm="tee" name="fd" dev="proc" ino=30282 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:25.125000 audit[4372]: AVC avc: denied { write } for pid=4372 comm="tee" name="fd" dev="proc" ino=30283 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:25.125000 audit[4372]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe3228695d a2=241 a3=1b6 items=1 ppid=4325 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.125000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:28:25.125000 audit: PATH item=0 name="/dev/fd/63" inode=30261 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:25.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:25.126000 audit[4380]: AVC avc: denied { write } for pid=4380 comm="tee" name="fd" dev="proc" ino=30855 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:25.126000 audit[4380]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff17bd794e a2=241 a3=1b6 items=1 ppid=4333 pid=4380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.126000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:28:25.126000 audit: PATH item=0 name="/dev/fd/63" inode=30273 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:25.126000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:25.124000 audit[4369]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc1b40795d a2=241 a3=1b6 items=1 ppid=4329 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.124000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:28:25.124000 audit: PATH item=0 name="/dev/fd/63" inode=30260 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:25.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:25.179000 audit[4412]: AVC avc: denied { write } for pid=4412 comm="tee" name="fd" dev="proc" ino=30882 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:28:25.179000 audit[4412]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffccf41d95f a2=241 a3=1b6 items=1 ppid=4332 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:25.179000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:28:25.179000 audit: PATH item=0 name="/dev/fd/63" inode=30289 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:28:25.179000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:28:25.105935 systemd[1]: run-containerd-runc-k8s.io-7dab9bb23639f8dd1be0275750646ed8b5e56b941d0d53153d98176fac0b94f4-runc.NjDWLe.mount: Deactivated successfully. Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:23.935 [INFO][4243] k8s.go 578: Cleaning up netns ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:23.936 [INFO][4243] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" iface="eth0" netns="/var/run/netns/cni-0f2995ae-11e1-0e8b-740e-254f422f4aba" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:23.937 [INFO][4243] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" iface="eth0" netns="/var/run/netns/cni-0f2995ae-11e1-0e8b-740e-254f422f4aba" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:23.938 [INFO][4243] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" iface="eth0" netns="/var/run/netns/cni-0f2995ae-11e1-0e8b-740e-254f422f4aba" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:23.939 [INFO][4243] k8s.go 585: Releasing IP address(es) ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:23.939 [INFO][4243] utils.go 188: Calico CNI releasing IP address ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:26.362 [INFO][4252] ipam_plugin.go 415: Releasing address using handleID ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:26.367 [INFO][4252] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:26.367 [INFO][4252] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:26.390 [WARNING][4252] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:26.390 [INFO][4252] ipam_plugin.go 443: Releasing address using workloadID ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:26.393 [INFO][4252] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:26.397790 env[1135]: 2024-02-09 19:28:26.394 [INFO][4243] k8s.go 591: Teardown processing complete. ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:28:26.404116 env[1135]: time="2024-02-09T19:28:26.402004583Z" level=info msg="TearDown network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" successfully" Feb 9 19:28:26.404116 env[1135]: time="2024-02-09T19:28:26.402040671Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" returns successfully" Feb 9 19:28:26.404116 env[1135]: time="2024-02-09T19:28:26.402561488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-f8g65,Uid:856ba022-e379-4cd0-87a4-cdfa313ac255,Namespace:kube-system,Attempt:1,}" Feb 9 19:28:26.401366 systemd[1]: run-netns-cni\x2d0f2995ae\x2d11e1\x2d0e8b\x2d740e\x2d254f422f4aba.mount: Deactivated successfully. 
Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.491000 audit: BPF prog-id=10 op=LOAD Feb 9 19:28:26.491000 
audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffffe768ff0 a2=70 a3=7f243035b000 items=0 ppid=4364 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.491000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:26.493000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit: BPF prog-id=11 op=LOAD Feb 9 19:28:26.493000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffffe768ff0 a2=70 a3=6e items=0 ppid=4364 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.493000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:26.493000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffffe768fa0 a2=70 a3=470860 items=0 ppid=4364 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.493000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit: BPF prog-id=12 op=LOAD Feb 9 19:28:26.493000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffffe768f80 a2=70 a3=7ffffe768ff0 items=0 ppid=4364 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.493000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:26.493000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffffe769060 a2=70 a3=0 items=0 ppid=4364 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.493000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffffe769050 a2=70 a3=0 items=0 ppid=4364 
pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.493000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:26.493000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.493000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffffe769090 a2=70 a3=fe00 items=0 ppid=4364 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.493000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { perfmon } for pid=4495 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit[4495]: AVC avc: denied { bpf } for pid=4495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.495000 audit: BPF prog-id=13 op=LOAD Feb 9 19:28:26.495000 audit[4495]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffffe768fb0 a2=70 a3=ffffffff items=0 ppid=4364 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.495000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:28:26.502000 audit[4504]: AVC avc: denied { bpf } for pid=4504 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.502000 audit[4504]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff8ae183e0 a2=70 a3=ffff items=0 ppid=4364 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.502000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:28:26.502000 audit[4504]: AVC avc: denied { bpf } for pid=4504 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:28:26.502000 audit[4504]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff8ae182b0 a2=70 a3=3 items=0 ppid=4364 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.502000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:28:26.509000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:28:26.605000 audit[4532]: NETFILTER_CFG table=mangle:119 family=2 entries=19 op=nft_register_chain pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 
19:28:26.607600 kernel: kauditd_printk_skb: 107 callbacks suppressed Feb 9 19:28:26.607672 kernel: audit: type=1325 audit(1707506906.605:389): table=mangle:119 family=2 entries=19 op=nft_register_chain pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:26.605000 audit[4532]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffd4c096bc0 a2=0 a3=7ffd4c096bac items=0 ppid=4364 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.620404 kernel: audit: type=1300 audit(1707506906.605:389): arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffd4c096bc0 a2=0 a3=7ffd4c096bac items=0 ppid=4364 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.605000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:26.625918 kernel: audit: type=1327 audit(1707506906.605:389): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:26.627000 audit[4533]: NETFILTER_CFG table=raw:120 family=2 entries=19 op=nft_register_chain pid=4533 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:26.631921 kernel: audit: type=1325 audit(1707506906.627:390): table=raw:120 family=2 entries=19 op=nft_register_chain pid=4533 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:26.627000 audit[4533]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffd9a43a310 a2=0 a3=0 items=0 ppid=4364 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.640977 kernel: audit: type=1300 audit(1707506906.627:390): arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffd9a43a310 a2=0 a3=0 items=0 ppid=4364 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.627000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:26.646008 kernel: audit: type=1327 audit(1707506906.627:390): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:26.653000 audit[4538]: NETFILTER_CFG table=nat:121 family=2 entries=16 op=nft_register_chain pid=4538 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:26.660921 kernel: audit: type=1325 audit(1707506906.653:391): table=nat:121 family=2 entries=16 op=nft_register_chain pid=4538 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:26.653000 audit[4538]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffc79075ee0 a2=0 a3=7ffc79075ecc items=0 ppid=4364 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.672282 kernel: audit: type=1300 audit(1707506906.653:391): arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffc79075ee0 a2=0 a3=7ffc79075ecc items=0 ppid=4364 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.672401 kernel: audit: type=1327 audit(1707506906.653:391): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:26.653000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:26.655000 audit[4536]: NETFILTER_CFG table=filter:122 family=2 entries=39 op=nft_register_chain pid=4536 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:26.679012 kernel: audit: type=1325 audit(1707506906.655:392): table=filter:122 family=2 entries=39 op=nft_register_chain pid=4536 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:26.655000 audit[4536]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffd2e6e0330 a2=0 a3=0 items=0 ppid=4364 pid=4536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.655000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:26.706974 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali91d2bc7512c: link becomes ready Feb 9 19:28:26.710159 systemd-networkd[1018]: cali91d2bc7512c: Link UP Feb 9 19:28:26.711750 systemd-networkd[1018]: cali91d2bc7512c: Gained carrier Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.504 [INFO][4474] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0 coredns-787d4945fb- kube-system 856ba022-e379-4cd0-87a4-cdfa313ac255 
1024 0 2024-02-09 19:26:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-2-b-76a749f546.novalocal coredns-787d4945fb-f8g65 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali91d2bc7512c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Namespace="kube-system" Pod="coredns-787d4945fb-f8g65" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.506 [INFO][4474] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Namespace="kube-system" Pod="coredns-787d4945fb-f8g65" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.568 [INFO][4513] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" HandleID="k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.592 [INFO][4513] ipam_plugin.go 268: Auto assigning IP ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" HandleID="k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000296e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-2-b-76a749f546.novalocal", "pod":"coredns-787d4945fb-f8g65", "timestamp":"2024-02-09 
19:28:26.568934872 +0000 UTC"}, Hostname:"ci-3510-3-2-b-76a749f546.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.593 [INFO][4513] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.593 [INFO][4513] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.593 [INFO][4513] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-b-76a749f546.novalocal' Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.601 [INFO][4513] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.626 [INFO][4513] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.645 [INFO][4513] ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.655 [INFO][4513] ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.673 [INFO][4513] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.673 [INFO][4513] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.680 [INFO][4513] ipam.go 
1682: Creating new handle: k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.684 [INFO][4513] ipam.go 1203: Writing block in order to claim IPs block=192.168.2.64/26 handle="k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.692 [INFO][4513] ipam.go 1216: Successfully claimed IPs: [192.168.2.65/26] block=192.168.2.64/26 handle="k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.692 [INFO][4513] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.65/26] handle="k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.692 [INFO][4513] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:28:26.728678 env[1135]: 2024-02-09 19:28:26.692 [INFO][4513] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.2.65/26] IPv6=[] ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" HandleID="k8s-pod-network.ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.730283 env[1135]: 2024-02-09 19:28:26.698 [INFO][4474] k8s.go 385: Populated endpoint ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Namespace="kube-system" Pod="coredns-787d4945fb-f8g65" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"856ba022-e379-4cd0-87a4-cdfa313ac255", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"", Pod:"coredns-787d4945fb-f8g65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali91d2bc7512c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:26.730283 env[1135]: 2024-02-09 19:28:26.698 [INFO][4474] k8s.go 386: Calico CNI using IPs: [192.168.2.65/32] ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Namespace="kube-system" Pod="coredns-787d4945fb-f8g65" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.730283 env[1135]: 2024-02-09 19:28:26.698 [INFO][4474] dataplane_linux.go 68: Setting the host side veth name to cali91d2bc7512c ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Namespace="kube-system" Pod="coredns-787d4945fb-f8g65" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.730283 env[1135]: 2024-02-09 19:28:26.706 [INFO][4474] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Namespace="kube-system" Pod="coredns-787d4945fb-f8g65" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.730283 env[1135]: 2024-02-09 19:28:26.706 [INFO][4474] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Namespace="kube-system" Pod="coredns-787d4945fb-f8g65" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"856ba022-e379-4cd0-87a4-cdfa313ac255", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d", Pod:"coredns-787d4945fb-f8g65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91d2bc7512c", MAC:"12:91:e8:6e:ee:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:26.730283 env[1135]: 2024-02-09 19:28:26.723 [INFO][4474] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d" Namespace="kube-system" Pod="coredns-787d4945fb-f8g65" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:28:26.769000 audit[4564]: NETFILTER_CFG table=filter:123 family=2 entries=36 op=nft_register_chain pid=4564 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:26.769000 audit[4564]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffdcff4e770 a2=0 a3=7ffdcff4e75c items=0 ppid=4364 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:26.769000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:26.771709 env[1135]: time="2024-02-09T19:28:26.771594056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:26.771808 env[1135]: time="2024-02-09T19:28:26.771658637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:26.771808 env[1135]: time="2024-02-09T19:28:26.771717738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:26.772193 env[1135]: time="2024-02-09T19:28:26.772093153Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d pid=4563 runtime=io.containerd.runc.v2 Feb 9 19:28:26.853370 env[1135]: time="2024-02-09T19:28:26.853288451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-f8g65,Uid:856ba022-e379-4cd0-87a4-cdfa313ac255,Namespace:kube-system,Attempt:1,} returns sandbox id \"ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d\"" Feb 9 19:28:26.857191 env[1135]: time="2024-02-09T19:28:26.856264718Z" level=info msg="CreateContainer within sandbox \"ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:28:26.892039 env[1135]: time="2024-02-09T19:28:26.891990787Z" level=info msg="CreateContainer within sandbox \"ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05a5333110b617dafd391ee21848a11b277c69bef18e187a36bb2d6ff85f34e4\"" Feb 9 19:28:26.894550 env[1135]: time="2024-02-09T19:28:26.893155694Z" level=info msg="StartContainer for \"05a5333110b617dafd391ee21848a11b277c69bef18e187a36bb2d6ff85f34e4\"" Feb 9 19:28:27.191006 env[1135]: time="2024-02-09T19:28:27.190887744Z" level=info msg="StartContainer for \"05a5333110b617dafd391ee21848a11b277c69bef18e187a36bb2d6ff85f34e4\" returns successfully" Feb 9 19:28:27.239714 systemd-networkd[1018]: vxlan.calico: Link UP Feb 9 19:28:27.239723 systemd-networkd[1018]: vxlan.calico: Gained carrier Feb 9 19:28:27.408940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034952554.mount: Deactivated successfully. 
Feb 9 19:28:28.272651 kubelet[2109]: I0209 19:28:28.272582 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-f8g65" podStartSLOduration=110.272477592 pod.CreationTimestamp="2024-02-09 19:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:28.233262625 +0000 UTC m=+123.924672847" watchObservedRunningTime="2024-02-09 19:28:28.272477592 +0000 UTC m=+123.963887814" Feb 9 19:28:28.380000 audit[4665]: NETFILTER_CFG table=filter:124 family=2 entries=12 op=nft_register_rule pid=4665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:28.380000 audit[4665]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffdbc9efde0 a2=0 a3=7ffdbc9efdcc items=0 ppid=2269 pid=4665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:28.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:28.382000 audit[4665]: NETFILTER_CFG table=nat:125 family=2 entries=30 op=nft_register_rule pid=4665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:28.382000 audit[4665]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffdbc9efde0 a2=0 a3=7ffdbc9efdcc items=0 ppid=2269 pid=4665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:28.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:28.433000 audit[4691]: NETFILTER_CFG table=filter:126 family=2 entries=9 
op=nft_register_rule pid=4691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:28.433000 audit[4691]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffed1f31460 a2=0 a3=7ffed1f3144c items=0 ppid=2269 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:28.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:28.435000 audit[4691]: NETFILTER_CFG table=nat:127 family=2 entries=51 op=nft_register_chain pid=4691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:28.435000 audit[4691]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffed1f31460 a2=0 a3=7ffed1f3144c items=0 ppid=2269 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:28.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:28.576313 systemd-networkd[1018]: cali91d2bc7512c: Gained IPv6LL Feb 9 19:28:29.280261 systemd-networkd[1018]: vxlan.calico: Gained IPv6LL Feb 9 19:28:29.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.217:22-172.24.4.1:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:29.799958 systemd[1]: Started sshd@15-172.24.4.217:22-172.24.4.1:41036.service. 
Feb 9 19:28:31.288641 sshd[4715]: Accepted publickey for core from 172.24.4.1 port 41036 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:31.287000 audit[4715]: USER_ACCT pid=4715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:31.289000 audit[4715]: CRED_ACQ pid=4715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:31.290000 audit[4715]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1f6b5f70 a2=3 a3=0 items=0 ppid=1 pid=4715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:31.290000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:31.293167 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:31.306099 systemd-logind[1122]: New session 16 of user core. Feb 9 19:28:31.306412 systemd[1]: Started session-16.scope. 
Feb 9 19:28:31.322000 audit[4715]: USER_START pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:31.325000 audit[4718]: CRED_ACQ pid=4718 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:31.561737 env[1135]: time="2024-02-09T19:28:31.559053790Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.678 [INFO][4734] k8s.go 578: Cleaning up netns ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.678 [INFO][4734] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" iface="eth0" netns="/var/run/netns/cni-22e1f607-e5e8-91a3-1f95-29a5c5423484" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.679 [INFO][4734] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" iface="eth0" netns="/var/run/netns/cni-22e1f607-e5e8-91a3-1f95-29a5c5423484" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.680 [INFO][4734] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" iface="eth0" netns="/var/run/netns/cni-22e1f607-e5e8-91a3-1f95-29a5c5423484" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.680 [INFO][4734] k8s.go 585: Releasing IP address(es) ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.680 [INFO][4734] utils.go 188: Calico CNI releasing IP address ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.724 [INFO][4744] ipam_plugin.go 415: Releasing address using handleID ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.724 [INFO][4744] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.725 [INFO][4744] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.737 [WARNING][4744] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.737 [INFO][4744] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.740 [INFO][4744] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:31.747066 env[1135]: 2024-02-09 19:28:31.743 [INFO][4734] k8s.go 591: Teardown processing complete. ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:28:31.751614 systemd[1]: run-netns-cni\x2d22e1f607\x2de5e8\x2d91a3\x2d1f95\x2d29a5c5423484.mount: Deactivated successfully. 
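For tracing address churn across teardown/setup cycles like the one above, the Calico CNI `ipam_plugin.go 286` lines carry the authoritative per-container assignment. A small, hedged sketch (the sample line is abridged from the coredns assignment earlier in this log; the regex is an assumption about the stable parts of the message, not a Calico-provided parser):

```python
import re

# Extract IPv4 assignments from Calico CNI IPAM log lines.
# Sample abridged from the coredns-787d4945fb-f8g65 entry in this log.
line = ('Calico CNI IPAM assigned addresses IPv4=[192.168.2.65/26] IPv6=[] '
        'ContainerID="ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d"')

pattern = re.compile(
    r'IPAM assigned addresses IPv4=\[([^\]]*)\].*ContainerID="([0-9a-f]+)"')
m = pattern.search(line)
if m:
    ipv4, container_id = m.group(1), m.group(2)
    print(ipv4, container_id[:12])
    # → 192.168.2.65/26 ecdfffdccba8
```

Correlating these with the later `Asked to release address but it doesn't exist` warning distinguishes a benign double-release during sandbox teardown from a genuine IPAM leak.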
Feb 9 19:28:31.752096 env[1135]: time="2024-02-09T19:28:31.752056748Z" level=info msg="TearDown network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" successfully" Feb 9 19:28:31.752583 env[1135]: time="2024-02-09T19:28:31.752556806Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" returns successfully" Feb 9 19:28:31.753748 env[1135]: time="2024-02-09T19:28:31.753700633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f585564b5-stdvx,Uid:d9a58a43-cc9b-49e7-89c7-8d2f444dd31a,Namespace:calico-system,Attempt:1,}" Feb 9 19:28:32.047286 systemd-networkd[1018]: cali8bd96fc96a9: Link UP Feb 9 19:28:32.052146 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:28:32.052247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8bd96fc96a9: link becomes ready Feb 9 19:28:32.052048 systemd-networkd[1018]: cali8bd96fc96a9: Gained carrier Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:31.942 [INFO][4757] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0 calico-kube-controllers-6f585564b5- calico-system d9a58a43-cc9b-49e7-89c7-8d2f444dd31a 1076 0 2024-02-09 19:26:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f585564b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-2-b-76a749f546.novalocal calico-kube-controllers-6f585564b5-stdvx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8bd96fc96a9 [] []}} ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Namespace="calico-system" Pod="calico-kube-controllers-6f585564b5-stdvx" 
WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:31.942 [INFO][4757] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Namespace="calico-system" Pod="calico-kube-controllers-6f585564b5-stdvx" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:31.985 [INFO][4773] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" HandleID="k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:31.998 [INFO][4773] ipam_plugin.go 268: Auto assigning IP ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" HandleID="k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f05c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-2-b-76a749f546.novalocal", "pod":"calico-kube-controllers-6f585564b5-stdvx", "timestamp":"2024-02-09 19:28:31.985868831 +0000 UTC"}, Hostname:"ci-3510-3-2-b-76a749f546.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:31.999 [INFO][4773] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:31.999 [INFO][4773] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:31.999 [INFO][4773] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-b-76a749f546.novalocal' Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.001 [INFO][4773] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.007 [INFO][4773] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.016 [INFO][4773] ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.019 [INFO][4773] ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.021 [INFO][4773] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.021 [INFO][4773] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.024 [INFO][4773] ipam.go 1682: Creating new handle: k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43 Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.029 [INFO][4773] ipam.go 1203: Writing block in order to claim IPs block=192.168.2.64/26 handle="k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 
env[1135]: 2024-02-09 19:28:32.036 [INFO][4773] ipam.go 1216: Successfully claimed IPs: [192.168.2.66/26] block=192.168.2.64/26 handle="k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.036 [INFO][4773] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.66/26] handle="k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.036 [INFO][4773] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:32.078752 env[1135]: 2024-02-09 19:28:32.036 [INFO][4773] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.2.66/26] IPv6=[] ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" HandleID="k8s-pod-network.4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:32.080581 env[1135]: 2024-02-09 19:28:32.039 [INFO][4757] k8s.go 385: Populated endpoint ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Namespace="calico-system" Pod="calico-kube-controllers-6f585564b5-stdvx" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0", GenerateName:"calico-kube-controllers-6f585564b5-", Namespace:"calico-system", SelfLink:"", UID:"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f585564b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6f585564b5-stdvx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8bd96fc96a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:32.080581 env[1135]: 2024-02-09 19:28:32.039 [INFO][4757] k8s.go 386: Calico CNI using IPs: [192.168.2.66/32] ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Namespace="calico-system" Pod="calico-kube-controllers-6f585564b5-stdvx" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:32.080581 env[1135]: 2024-02-09 19:28:32.039 [INFO][4757] dataplane_linux.go 68: Setting the host side veth name to cali8bd96fc96a9 ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Namespace="calico-system" Pod="calico-kube-controllers-6f585564b5-stdvx" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:32.080581 env[1135]: 2024-02-09 19:28:32.053 [INFO][4757] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" 
Namespace="calico-system" Pod="calico-kube-controllers-6f585564b5-stdvx" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:32.080581 env[1135]: 2024-02-09 19:28:32.057 [INFO][4757] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Namespace="calico-system" Pod="calico-kube-controllers-6f585564b5-stdvx" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0", GenerateName:"calico-kube-controllers-6f585564b5-", Namespace:"calico-system", SelfLink:"", UID:"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f585564b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43", Pod:"calico-kube-controllers-6f585564b5-stdvx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8bd96fc96a9", MAC:"56:f6:3a:8f:b5:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:32.080581 env[1135]: 2024-02-09 19:28:32.071 [INFO][4757] k8s.go 491: Wrote updated endpoint to datastore ContainerID="4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43" Namespace="calico-system" Pod="calico-kube-controllers-6f585564b5-stdvx" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:28:32.108342 kernel: kauditd_printk_skb: 25 callbacks suppressed Feb 9 19:28:32.108453 kernel: audit: type=1325 audit(1707506912.103:404): table=filter:128 family=2 entries=40 op=nft_register_chain pid=4795 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:32.103000 audit[4795]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4795 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:32.103000 audit[4795]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffe3b5e7600 a2=0 a3=7ffe3b5e75ec items=0 ppid=4364 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:32.116185 kernel: audit: type=1300 audit(1707506912.103:404): arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7ffe3b5e7600 a2=0 a3=7ffe3b5e75ec items=0 ppid=4364 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:32.103000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:32.122003 kernel: audit: type=1327 audit(1707506912.103:404): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:32.122625 env[1135]: time="2024-02-09T19:28:32.121883337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:32.122768 env[1135]: time="2024-02-09T19:28:32.122741979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:32.122883 env[1135]: time="2024-02-09T19:28:32.122859440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:32.123213 env[1135]: time="2024-02-09T19:28:32.123182275Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43 pid=4802 runtime=io.containerd.runc.v2 Feb 9 19:28:32.239226 env[1135]: time="2024-02-09T19:28:32.239182460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f585564b5-stdvx,Uid:d9a58a43-cc9b-49e7-89c7-8d2f444dd31a,Namespace:calico-system,Attempt:1,} returns sandbox id \"4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43\"" Feb 9 19:28:32.242931 env[1135]: time="2024-02-09T19:28:32.242883508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 19:28:32.467664 sshd[4715]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:32.467000 audit[4715]: USER_END pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:32.468000 audit[4715]: CRED_DISP pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:32.491056 kernel: audit: type=1106 audit(1707506912.467:405): pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:32.491168 kernel: audit: type=1104 audit(1707506912.468:406): pid=4715 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:32.492048 systemd[1]: sshd@15-172.24.4.217:22-172.24.4.1:41036.service: Deactivated successfully. Feb 9 19:28:32.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.217:22-172.24.4.1:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:32.493664 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:28:32.496953 kernel: audit: type=1131 audit(1707506912.491:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.24.4.217:22-172.24.4.1:41036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:32.497728 systemd-logind[1122]: Session 16 logged out. Waiting for processes to exit. 
Feb 9 19:28:32.499037 systemd-logind[1122]: Removed session 16. Feb 9 19:28:33.558658 env[1135]: time="2024-02-09T19:28:33.558567921Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.672 [INFO][4852] k8s.go 578: Cleaning up netns ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.672 [INFO][4852] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" iface="eth0" netns="/var/run/netns/cni-a3e0731a-4f09-fd88-f047-c26511043663" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.672 [INFO][4852] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" iface="eth0" netns="/var/run/netns/cni-a3e0731a-4f09-fd88-f047-c26511043663" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.672 [INFO][4852] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" iface="eth0" netns="/var/run/netns/cni-a3e0731a-4f09-fd88-f047-c26511043663" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.673 [INFO][4852] k8s.go 585: Releasing IP address(es) ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.673 [INFO][4852] utils.go 188: Calico CNI releasing IP address ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.702 [INFO][4858] ipam_plugin.go 415: Releasing address using handleID ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.703 [INFO][4858] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.703 [INFO][4858] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.719 [WARNING][4858] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.719 [INFO][4858] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.721 [INFO][4858] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:33.727375 env[1135]: 2024-02-09 19:28:33.724 [INFO][4852] k8s.go 591: Teardown processing complete. ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:28:33.737451 env[1135]: time="2024-02-09T19:28:33.735591856Z" level=info msg="TearDown network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" successfully" Feb 9 19:28:33.737451 env[1135]: time="2024-02-09T19:28:33.735667037Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" returns successfully" Feb 9 19:28:33.735141 systemd[1]: run-netns-cni\x2da3e0731a\x2d4f09\x2dfd88\x2df047\x2dc26511043663.mount: Deactivated successfully. 
Feb 9 19:28:33.738556 env[1135]: time="2024-02-09T19:28:33.738425656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gcw56,Uid:93fdcc9f-1773-437d-8e12-9052cf2f26e5,Namespace:kube-system,Attempt:1,}" Feb 9 19:28:33.824837 systemd-networkd[1018]: cali8bd96fc96a9: Gained IPv6LL Feb 9 19:28:33.938428 systemd-networkd[1018]: cali6e99fa2c140: Link UP Feb 9 19:28:33.940537 systemd-networkd[1018]: cali6e99fa2c140: Gained carrier Feb 9 19:28:33.941008 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6e99fa2c140: link becomes ready Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.860 [INFO][4864] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0 coredns-787d4945fb- kube-system 93fdcc9f-1773-437d-8e12-9052cf2f26e5 1087 0 2024-02-09 19:26:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-2-b-76a749f546.novalocal coredns-787d4945fb-gcw56 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6e99fa2c140 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Namespace="kube-system" Pod="coredns-787d4945fb-gcw56" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.861 [INFO][4864] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Namespace="kube-system" Pod="coredns-787d4945fb-gcw56" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.890 [INFO][4875] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" HandleID="k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.904 [INFO][4875] ipam_plugin.go 268: Auto assigning IP ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" HandleID="k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c2a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-2-b-76a749f546.novalocal", "pod":"coredns-787d4945fb-gcw56", "timestamp":"2024-02-09 19:28:33.890641631 +0000 UTC"}, Hostname:"ci-3510-3-2-b-76a749f546.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.904 [INFO][4875] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.904 [INFO][4875] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.905 [INFO][4875] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-b-76a749f546.novalocal' Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.907 [INFO][4875] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.911 [INFO][4875] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.916 [INFO][4875] ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.918 [INFO][4875] ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.920 [INFO][4875] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.920 [INFO][4875] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.922 [INFO][4875] ipam.go 1682: Creating new handle: k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1 Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.926 [INFO][4875] ipam.go 1203: Writing block in order to claim IPs block=192.168.2.64/26 handle="k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.933 [INFO][4875] ipam.go 1216: Successfully claimed IPs: [192.168.2.67/26] block=192.168.2.64/26 
handle="k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.933 [INFO][4875] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.67/26] handle="k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.933 [INFO][4875] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:33.960950 env[1135]: 2024-02-09 19:28:33.933 [INFO][4875] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.2.67/26] IPv6=[] ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" HandleID="k8s-pod-network.0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.961625 env[1135]: 2024-02-09 19:28:33.935 [INFO][4864] k8s.go 385: Populated endpoint ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Namespace="kube-system" Pod="coredns-787d4945fb-gcw56" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"93fdcc9f-1773-437d-8e12-9052cf2f26e5", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"", Pod:"coredns-787d4945fb-gcw56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e99fa2c140", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:33.961625 env[1135]: 2024-02-09 19:28:33.936 [INFO][4864] k8s.go 386: Calico CNI using IPs: [192.168.2.67/32] ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Namespace="kube-system" Pod="coredns-787d4945fb-gcw56" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.961625 env[1135]: 2024-02-09 19:28:33.936 [INFO][4864] dataplane_linux.go 68: Setting the host side veth name to cali6e99fa2c140 ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Namespace="kube-system" Pod="coredns-787d4945fb-gcw56" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.961625 env[1135]: 2024-02-09 19:28:33.938 [INFO][4864] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Namespace="kube-system" 
Pod="coredns-787d4945fb-gcw56" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.961625 env[1135]: 2024-02-09 19:28:33.942 [INFO][4864] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Namespace="kube-system" Pod="coredns-787d4945fb-gcw56" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"93fdcc9f-1773-437d-8e12-9052cf2f26e5", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1", Pod:"coredns-787d4945fb-gcw56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e99fa2c140", MAC:"22:68:0c:04:59:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:33.961625 env[1135]: 2024-02-09 19:28:33.958 [INFO][4864] k8s.go 491: Wrote updated endpoint to datastore ContainerID="0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1" Namespace="kube-system" Pod="coredns-787d4945fb-gcw56" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:28:33.972000 audit[4895]: NETFILTER_CFG table=filter:129 family=2 entries=34 op=nft_register_chain pid=4895 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:33.981520 kernel: audit: type=1325 audit(1707506913.972:408): table=filter:129 family=2 entries=34 op=nft_register_chain pid=4895 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:33.981608 kernel: audit: type=1300 audit(1707506913.972:408): arch=c000003e syscall=46 success=yes exit=17900 a0=3 a1=7ffec727c400 a2=0 a3=7ffec727c3ec items=0 ppid=4364 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:33.981634 kernel: audit: type=1327 audit(1707506913.972:408): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:33.972000 audit[4895]: SYSCALL arch=c000003e syscall=46 success=yes exit=17900 a0=3 a1=7ffec727c400 a2=0 a3=7ffec727c3ec items=0 ppid=4364 pid=4895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:33.972000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:33.994201 env[1135]: time="2024-02-09T19:28:33.994135878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:33.994396 env[1135]: time="2024-02-09T19:28:33.994370418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:33.994492 env[1135]: time="2024-02-09T19:28:33.994469916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:33.994774 env[1135]: time="2024-02-09T19:28:33.994717069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1 pid=4902 runtime=io.containerd.runc.v2 Feb 9 19:28:34.068594 env[1135]: time="2024-02-09T19:28:34.068509650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gcw56,Uid:93fdcc9f-1773-437d-8e12-9052cf2f26e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1\"" Feb 9 19:28:34.075454 env[1135]: time="2024-02-09T19:28:34.075408831Z" level=info msg="CreateContainer within sandbox \"0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:28:34.095093 env[1135]: time="2024-02-09T19:28:34.093871548Z" level=info msg="CreateContainer within sandbox \"0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"3e960c8eb81c6bc61048a9925d1533f82a9ed2999f95b89cf231826c3744b256\"" Feb 9 19:28:34.096172 env[1135]: time="2024-02-09T19:28:34.096116683Z" level=info msg="StartContainer for \"3e960c8eb81c6bc61048a9925d1533f82a9ed2999f95b89cf231826c3744b256\"" Feb 9 19:28:34.154446 env[1135]: time="2024-02-09T19:28:34.154331481Z" level=info msg="StartContainer for \"3e960c8eb81c6bc61048a9925d1533f82a9ed2999f95b89cf231826c3744b256\" returns successfully" Feb 9 19:28:34.295000 audit[5002]: NETFILTER_CFG table=filter:130 family=2 entries=6 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:34.304009 kernel: audit: type=1325 audit(1707506914.295:409): table=filter:130 family=2 entries=6 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:34.295000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc57bfe600 a2=0 a3=7ffc57bfe5ec items=0 ppid=2269 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:34.295000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:34.313000 audit[5002]: NETFILTER_CFG table=nat:131 family=2 entries=60 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:34.313000 audit[5002]: SYSCALL arch=c000003e syscall=46 success=yes exit=19324 a0=3 a1=7ffc57bfe600 a2=0 a3=7ffc57bfe5ec items=0 ppid=2269 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:34.313000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:34.737066 systemd[1]: run-containerd-runc-k8s.io-0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1-runc.S3ZafW.mount: Deactivated successfully. Feb 9 19:28:35.260716 kubelet[2109]: I0209 19:28:35.256792 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-gcw56" podStartSLOduration=117.256691321 pod.CreationTimestamp="2024-02-09 19:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:34.242726602 +0000 UTC m=+129.934136775" watchObservedRunningTime="2024-02-09 19:28:35.256691321 +0000 UTC m=+130.948101543" Feb 9 19:28:35.358000 audit[5029]: NETFILTER_CFG table=filter:132 family=2 entries=6 op=nft_register_rule pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:35.358000 audit[5029]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffeec4a0ea0 a2=0 a3=7ffeec4a0e8c items=0 ppid=2269 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:35.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:35.361085 systemd-networkd[1018]: cali6e99fa2c140: Gained IPv6LL Feb 9 19:28:35.377000 audit[5029]: NETFILTER_CFG table=nat:133 family=2 entries=72 op=nft_register_chain pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:35.377000 audit[5029]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffeec4a0ea0 a2=0 a3=7ffeec4a0e8c items=0 ppid=2269 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:35.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:35.558998 env[1135]: time="2024-02-09T19:28:35.558406494Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.625 [INFO][5047] k8s.go 578: Cleaning up netns ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.625 [INFO][5047] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" iface="eth0" netns="/var/run/netns/cni-1c96232e-0e6c-99f7-1b53-8013c6efaa18" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.625 [INFO][5047] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" iface="eth0" netns="/var/run/netns/cni-1c96232e-0e6c-99f7-1b53-8013c6efaa18" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.626 [INFO][5047] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" iface="eth0" netns="/var/run/netns/cni-1c96232e-0e6c-99f7-1b53-8013c6efaa18" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.626 [INFO][5047] k8s.go 585: Releasing IP address(es) ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.626 [INFO][5047] utils.go 188: Calico CNI releasing IP address ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.678 [INFO][5054] ipam_plugin.go 415: Releasing address using handleID ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.678 [INFO][5054] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.679 [INFO][5054] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.689 [WARNING][5054] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.689 [INFO][5054] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.691 [INFO][5054] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:35.695238 env[1135]: 2024-02-09 19:28:35.693 [INFO][5047] k8s.go 591: Teardown processing complete. ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:28:35.701561 systemd[1]: run-netns-cni\x2d1c96232e\x2d0e6c\x2d99f7\x2d1b53\x2d8013c6efaa18.mount: Deactivated successfully. 
Feb 9 19:28:35.702761 env[1135]: time="2024-02-09T19:28:35.702725301Z" level=info msg="TearDown network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" successfully" Feb 9 19:28:35.702851 env[1135]: time="2024-02-09T19:28:35.702830639Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" returns successfully" Feb 9 19:28:35.703614 env[1135]: time="2024-02-09T19:28:35.703589393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x2vsr,Uid:258b5f6f-f507-494e-8282-83a91907d3f5,Namespace:calico-system,Attempt:1,}" Feb 9 19:28:35.909068 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:28:35.909185 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3f586c8501f: link becomes ready Feb 9 19:28:35.907973 systemd-networkd[1018]: cali3f586c8501f: Link UP Feb 9 19:28:35.909250 systemd-networkd[1018]: cali3f586c8501f: Gained carrier Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.793 [INFO][5061] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0 csi-node-driver- calico-system 258b5f6f-f507-494e-8282-83a91907d3f5 1107 0 2024-02-09 19:26:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510-3-2-b-76a749f546.novalocal csi-node-driver-x2vsr eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali3f586c8501f [] []}} ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Namespace="calico-system" Pod="csi-node-driver-x2vsr" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-" Feb 9 19:28:35.932292 env[1135]: 
2024-02-09 19:28:35.793 [INFO][5061] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Namespace="calico-system" Pod="csi-node-driver-x2vsr" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.835 [INFO][5073] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" HandleID="k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.856 [INFO][5073] ipam_plugin.go 268: Auto assigning IP ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" HandleID="k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-2-b-76a749f546.novalocal", "pod":"csi-node-driver-x2vsr", "timestamp":"2024-02-09 19:28:35.835378961 +0000 UTC"}, Hostname:"ci-3510-3-2-b-76a749f546.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.856 [INFO][5073] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.856 [INFO][5073] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.856 [INFO][5073] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-b-76a749f546.novalocal' Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.858 [INFO][5073] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.864 [INFO][5073] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.875 [INFO][5073] ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.877 [INFO][5073] ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.881 [INFO][5073] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.881 [INFO][5073] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.885 [INFO][5073] ipam.go 1682: Creating new handle: k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3 Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.891 [INFO][5073] ipam.go 1203: Writing block in order to claim IPs block=192.168.2.64/26 handle="k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.901 [INFO][5073] ipam.go 1216: Successfully claimed IPs: [192.168.2.68/26] block=192.168.2.64/26 
handle="k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.901 [INFO][5073] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.68/26] handle="k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.902 [INFO][5073] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:28:35.932292 env[1135]: 2024-02-09 19:28:35.902 [INFO][5073] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.2.68/26] IPv6=[] ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" HandleID="k8s-pod-network.5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.935508 env[1135]: 2024-02-09 19:28:35.904 [INFO][5061] k8s.go 385: Populated endpoint ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Namespace="calico-system" Pod="csi-node-driver-x2vsr" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"258b5f6f-f507-494e-8282-83a91907d3f5", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"", Pod:"csi-node-driver-x2vsr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.2.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3f586c8501f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:35.935508 env[1135]: 2024-02-09 19:28:35.905 [INFO][5061] k8s.go 386: Calico CNI using IPs: [192.168.2.68/32] ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Namespace="calico-system" Pod="csi-node-driver-x2vsr" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.935508 env[1135]: 2024-02-09 19:28:35.905 [INFO][5061] dataplane_linux.go 68: Setting the host side veth name to cali3f586c8501f ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Namespace="calico-system" Pod="csi-node-driver-x2vsr" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.935508 env[1135]: 2024-02-09 19:28:35.909 [INFO][5061] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Namespace="calico-system" Pod="csi-node-driver-x2vsr" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.935508 env[1135]: 2024-02-09 19:28:35.909 [INFO][5061] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" Namespace="calico-system" Pod="csi-node-driver-x2vsr" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"258b5f6f-f507-494e-8282-83a91907d3f5", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3", Pod:"csi-node-driver-x2vsr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.2.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3f586c8501f", MAC:"8a:b2:be:e9:33:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:28:35.935508 env[1135]: 2024-02-09 19:28:35.928 [INFO][5061] k8s.go 491: Wrote updated endpoint to datastore ContainerID="5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3" 
Namespace="calico-system" Pod="csi-node-driver-x2vsr" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:28:35.952000 audit[5089]: NETFILTER_CFG table=filter:134 family=2 entries=42 op=nft_register_chain pid=5089 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:28:35.952000 audit[5089]: SYSCALL arch=c000003e syscall=46 success=yes exit=20696 a0=3 a1=7ffd05ee8d90 a2=0 a3=7ffd05ee8d7c items=0 ppid=4364 pid=5089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:35.952000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:28:35.988557 env[1135]: time="2024-02-09T19:28:35.988495071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:28:35.988735 env[1135]: time="2024-02-09T19:28:35.988710706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:28:35.988834 env[1135]: time="2024-02-09T19:28:35.988811937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:28:35.989137 env[1135]: time="2024-02-09T19:28:35.989107641Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3 pid=5102 runtime=io.containerd.runc.v2 Feb 9 19:28:36.061971 env[1135]: time="2024-02-09T19:28:36.061927674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x2vsr,Uid:258b5f6f-f507-494e-8282-83a91907d3f5,Namespace:calico-system,Attempt:1,} returns sandbox id \"5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3\"" Feb 9 19:28:37.475693 kernel: kauditd_printk_skb: 14 callbacks suppressed Feb 9 19:28:37.479142 kernel: audit: type=1130 audit(1707506917.469:414): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.217:22-172.24.4.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:37.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.217:22-172.24.4.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:37.470283 systemd[1]: Started sshd@16-172.24.4.217:22-172.24.4.1:41822.service. 
Feb 9 19:28:37.477544 systemd-networkd[1018]: cali3f586c8501f: Gained IPv6LL Feb 9 19:28:37.675179 env[1135]: time="2024-02-09T19:28:37.675108870Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:37.679229 env[1135]: time="2024-02-09T19:28:37.679182696Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:37.683675 env[1135]: time="2024-02-09T19:28:37.683616229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:37.688799 env[1135]: time="2024-02-09T19:28:37.688734146Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:37.693133 env[1135]: time="2024-02-09T19:28:37.693045970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 9 19:28:37.698145 env[1135]: time="2024-02-09T19:28:37.695587360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:28:37.714325 env[1135]: time="2024-02-09T19:28:37.714288935Z" level=info msg="CreateContainer within sandbox \"4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 19:28:37.747221 env[1135]: time="2024-02-09T19:28:37.747086790Z" level=info msg="CreateContainer within sandbox 
\"4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8066a266be97154c9ecf7b7e04669e5b85203d3a84e76a3138f95ed24eb7f0ad\"" Feb 9 19:28:37.748377 env[1135]: time="2024-02-09T19:28:37.748241468Z" level=info msg="StartContainer for \"8066a266be97154c9ecf7b7e04669e5b85203d3a84e76a3138f95ed24eb7f0ad\"" Feb 9 19:28:37.833535 env[1135]: time="2024-02-09T19:28:37.833473468Z" level=info msg="StartContainer for \"8066a266be97154c9ecf7b7e04669e5b85203d3a84e76a3138f95ed24eb7f0ad\" returns successfully" Feb 9 19:28:38.399176 kubelet[2109]: I0209 19:28:38.399140 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f585564b5-stdvx" podStartSLOduration=-9.223371924455713e+09 pod.CreationTimestamp="2024-02-09 19:26:46 +0000 UTC" firstStartedPulling="2024-02-09 19:28:32.240841184 +0000 UTC m=+127.932251326" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:38.284738348 +0000 UTC m=+133.976148510" watchObservedRunningTime="2024-02-09 19:28:38.399062275 +0000 UTC m=+134.090472417" Feb 9 19:28:39.083000 audit[5146]: USER_ACCT pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:39.096599 sshd[5146]: Accepted publickey for core from 172.24.4.1 port 41822 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:39.097292 kernel: audit: type=1101 audit(1707506919.083:415): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:39.085000 audit[5146]: CRED_ACQ 
pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:39.131743 kernel: audit: type=1103 audit(1707506919.085:416): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:39.171559 kernel: audit: type=1006 audit(1707506919.085:417): pid=5146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 19:28:39.171691 kernel: audit: type=1300 audit(1707506919.085:417): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9ba92d40 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:39.171748 kernel: audit: type=1327 audit(1707506919.085:417): proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:39.085000 audit[5146]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9ba92d40 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:39.085000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:39.131013 sshd[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:39.177599 systemd[1]: Started session-17.scope. Feb 9 19:28:39.179676 systemd-logind[1122]: New session 17 of user core. 
Feb 9 19:28:39.186000 audit[5146]: USER_START pid=5146 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:39.200114 kernel: audit: type=1105 audit(1707506919.186:418): pid=5146 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:39.187000 audit[5206]: CRED_ACQ pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:39.209920 kernel: audit: type=1103 audit(1707506919.187:419): pid=5206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:40.384847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645338768.mount: Deactivated successfully. 
Feb 9 19:28:41.001430 sshd[5146]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:41.003000 audit[5146]: USER_END pid=5146 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:41.011024 kernel: audit: type=1106 audit(1707506921.003:420): pid=5146 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:41.010962 systemd[1]: sshd@16-172.24.4.217:22-172.24.4.1:41822.service: Deactivated successfully. Feb 9 19:28:41.004000 audit[5146]: CRED_DISP pid=5146 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:41.017336 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:28:41.018012 kernel: audit: type=1104 audit(1707506921.004:421): pid=5146 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:41.018304 systemd-logind[1122]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:28:41.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.24.4.217:22-172.24.4.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:41.025659 systemd-logind[1122]: Removed session 17. 
Feb 9 19:28:41.230149 env[1135]: time="2024-02-09T19:28:41.230036500Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:41.232702 env[1135]: time="2024-02-09T19:28:41.232647311Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:41.251499 env[1135]: time="2024-02-09T19:28:41.251440436Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:41.259286 env[1135]: time="2024-02-09T19:28:41.258253575Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:41.259707 env[1135]: time="2024-02-09T19:28:41.259662600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 19:28:41.266784 env[1135]: time="2024-02-09T19:28:41.266720338Z" level=info msg="CreateContainer within sandbox \"5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:28:41.297164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505321571.mount: Deactivated successfully. 
Feb 9 19:28:41.318188 env[1135]: time="2024-02-09T19:28:41.318105510Z" level=info msg="CreateContainer within sandbox \"5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"be36cb2cc6d49226bd5769780d2a3c96100f7e2a34b537d7b3a945b0205863ee\"" Feb 9 19:28:41.320785 env[1135]: time="2024-02-09T19:28:41.320745165Z" level=info msg="StartContainer for \"be36cb2cc6d49226bd5769780d2a3c96100f7e2a34b537d7b3a945b0205863ee\"" Feb 9 19:28:41.583397 env[1135]: time="2024-02-09T19:28:41.582738602Z" level=info msg="StartContainer for \"be36cb2cc6d49226bd5769780d2a3c96100f7e2a34b537d7b3a945b0205863ee\" returns successfully" Feb 9 19:28:41.586413 env[1135]: time="2024-02-09T19:28:41.586334763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:28:44.150139 env[1135]: time="2024-02-09T19:28:44.150100553Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:44.154452 env[1135]: time="2024-02-09T19:28:44.154393891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:44.156956 env[1135]: time="2024-02-09T19:28:44.156931154Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:44.159813 env[1135]: time="2024-02-09T19:28:44.159789469Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:28:44.161247 
env[1135]: time="2024-02-09T19:28:44.161186461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 19:28:44.163548 env[1135]: time="2024-02-09T19:28:44.163510013Z" level=info msg="CreateContainer within sandbox \"5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:28:44.199864 env[1135]: time="2024-02-09T19:28:44.199821750Z" level=info msg="CreateContainer within sandbox \"5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"08758627ec651d1614cf3c125d1d6ea7b9765163473ac433e358b635c5951a03\"" Feb 9 19:28:44.201068 env[1135]: time="2024-02-09T19:28:44.201016673Z" level=info msg="StartContainer for \"08758627ec651d1614cf3c125d1d6ea7b9765163473ac433e358b635c5951a03\"" Feb 9 19:28:44.282102 env[1135]: time="2024-02-09T19:28:44.282039645Z" level=info msg="StartContainer for \"08758627ec651d1614cf3c125d1d6ea7b9765163473ac433e358b635c5951a03\" returns successfully" Feb 9 19:28:44.706979 kubelet[2109]: I0209 19:28:44.706934 2109 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:28:44.708692 kubelet[2109]: I0209 19:28:44.708664 2109 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:28:45.302807 kubelet[2109]: I0209 19:28:45.302682 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-x2vsr" podStartSLOduration=-9.223371916552223e+09 pod.CreationTimestamp="2024-02-09 19:26:45 +0000 UTC" firstStartedPulling="2024-02-09 19:28:36.063134259 +0000 UTC 
m=+131.754544401" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:45.301629591 +0000 UTC m=+140.993039783" watchObservedRunningTime="2024-02-09 19:28:45.302552634 +0000 UTC m=+140.993962857" Feb 9 19:28:46.002294 systemd[1]: Started sshd@17-172.24.4.217:22-172.24.4.1:59718.service. Feb 9 19:28:46.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.217:22-172.24.4.1:59718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:46.009001 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:28:46.009057 kernel: audit: type=1130 audit(1707506926.003:423): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.217:22-172.24.4.1:59718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:47.483000 audit[5313]: USER_ACCT pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:47.484745 sshd[5313]: Accepted publickey for core from 172.24.4.1 port 59718 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:47.489954 kernel: audit: type=1101 audit(1707506927.483:424): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:47.489000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh 
res=success' Feb 9 19:28:47.492456 sshd[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:47.498768 kernel: audit: type=1103 audit(1707506927.489:425): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:47.499974 kernel: audit: type=1006 audit(1707506927.490:426): pid=5313 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 9 19:28:47.500076 kernel: audit: type=1300 audit(1707506927.490:426): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff475aa5c0 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:47.490000 audit[5313]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff475aa5c0 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:47.505531 kernel: audit: type=1327 audit(1707506927.490:426): proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:47.490000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:47.512064 systemd[1]: Started session-18.scope. Feb 9 19:28:47.512545 systemd-logind[1122]: New session 18 of user core. 
Feb 9 19:28:47.520000 audit[5313]: USER_START pid=5313 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:47.526000 audit[5324]: CRED_ACQ pid=5324 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:47.532396 kernel: audit: type=1105 audit(1707506927.520:427): pid=5313 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:47.532526 kernel: audit: type=1103 audit(1707506927.526:428): pid=5324 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:48.839845 sshd[5313]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:48.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.217:22-172.24.4.1:59726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:48.841730 systemd[1]: Started sshd@18-172.24.4.217:22-172.24.4.1:59726.service. Feb 9 19:28:48.851788 kernel: audit: type=1130 audit(1707506928.840:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.217:22-172.24.4.1:59726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:48.851864 kernel: audit: type=1106 audit(1707506928.846:430): pid=5313 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:48.846000 audit[5313]: USER_END pid=5313 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:48.853130 systemd[1]: sshd@17-172.24.4.217:22-172.24.4.1:59718.service: Deactivated successfully. Feb 9 19:28:48.855193 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:28:48.855580 systemd-logind[1122]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:28:48.846000 audit[5313]: CRED_DISP pid=5313 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:48.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.24.4.217:22-172.24.4.1:59718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:48.858964 systemd-logind[1122]: Removed session 18. 
Feb 9 19:28:50.219734 sshd[5331]: Accepted publickey for core from 172.24.4.1 port 59726 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:50.218000 audit[5331]: USER_ACCT pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:50.220000 audit[5331]: CRED_ACQ pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:50.220000 audit[5331]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9349eb20 a2=3 a3=0 items=0 ppid=1 pid=5331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:50.220000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:50.222768 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:50.237580 systemd-logind[1122]: New session 19 of user core. Feb 9 19:28:50.238733 systemd[1]: Started session-19.scope. 
Feb 9 19:28:50.251000 audit[5331]: USER_START pid=5331 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:50.255000 audit[5336]: CRED_ACQ pid=5336 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:51.704463 sshd[5331]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:51.705119 systemd[1]: Started sshd@19-172.24.4.217:22-172.24.4.1:59728.service. Feb 9 19:28:51.715338 kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 19:28:51.715454 kernel: audit: type=1106 audit(1707506931.705:438): pid=5331 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:51.705000 audit[5331]: USER_END pid=5331 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:51.716146 systemd[1]: sshd@18-172.24.4.217:22-172.24.4.1:59726.service: Deactivated successfully. 
Feb 9 19:28:51.705000 audit[5331]: CRED_DISP pid=5331 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:51.724131 kernel: audit: type=1104 audit(1707506931.705:439): pid=5331 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:51.724759 systemd-logind[1122]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:28:51.724877 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:28:51.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.217:22-172.24.4.1:59728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:51.732069 kernel: audit: type=1130 audit(1707506931.714:440): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.217:22-172.24.4.1:59728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:51.731759 systemd-logind[1122]: Removed session 19. Feb 9 19:28:51.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.217:22-172.24.4.1:59726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:51.738104 kernel: audit: type=1131 audit(1707506931.715:441): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.24.4.217:22-172.24.4.1:59726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:28:53.276000 audit[5342]: USER_ACCT pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:53.291013 kernel: audit: type=1101 audit(1707506933.276:442): pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:53.291119 sshd[5342]: Accepted publickey for core from 172.24.4.1 port 59728 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:53.305362 kernel: audit: type=1103 audit(1707506933.290:443): pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:53.290000 audit[5342]: CRED_ACQ pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:53.292925 sshd[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:53.315985 kernel: audit: type=1006 audit(1707506933.290:444): pid=5342 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Feb 9 19:28:53.290000 audit[5342]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee0e7f450 a2=3 a3=0 items=0 ppid=1 pid=5342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:28:53.329956 kernel: audit: type=1300 audit(1707506933.290:444): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee0e7f450 a2=3 a3=0 items=0 ppid=1 pid=5342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:53.290000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:53.335977 kernel: audit: type=1327 audit(1707506933.290:444): proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:53.341571 systemd-logind[1122]: New session 20 of user core. Feb 9 19:28:53.342971 systemd[1]: Started session-20.scope. Feb 9 19:28:53.355000 audit[5342]: USER_START pid=5342 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:53.372297 kernel: audit: type=1105 audit(1707506933.355:445): pid=5342 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:53.372000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:56.037000 audit[5383]: NETFILTER_CFG table=filter:135 family=2 entries=18 op=nft_register_rule pid=5383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.037000 audit[5383]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7fffeb0ed410 a2=0 a3=7fffeb0ed3fc items=0 ppid=2269 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.039000 audit[5383]: NETFILTER_CFG table=nat:136 family=2 entries=78 op=nft_register_rule pid=5383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.039000 audit[5383]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7fffeb0ed410 a2=0 a3=7fffeb0ed3fc items=0 ppid=2269 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.092000 audit[5409]: NETFILTER_CFG table=filter:137 family=2 entries=30 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.092000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd28d5f720 a2=0 a3=7ffd28d5f70c items=0 ppid=2269 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.094000 audit[5409]: NETFILTER_CFG table=nat:138 family=2 entries=78 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:28:56.094000 audit[5409]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd28d5f720 a2=0 a3=7ffd28d5f70c items=0 
ppid=2269 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:56.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:28:56.269962 sshd[5342]: pam_unix(sshd:session): session closed for user core Feb 9 19:28:56.276000 audit[5342]: USER_END pid=5342 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:56.276000 audit[5342]: CRED_DISP pid=5342 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:56.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.217:22-172.24.4.1:34880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:56.278711 systemd[1]: Started sshd@20-172.24.4.217:22-172.24.4.1:34880.service. Feb 9 19:28:56.287082 systemd[1]: sshd@19-172.24.4.217:22-172.24.4.1:59728.service: Deactivated successfully. Feb 9 19:28:56.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.24.4.217:22-172.24.4.1:59728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:56.290426 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:28:56.291610 systemd-logind[1122]: Session 20 logged out. Waiting for processes to exit. 
Feb 9 19:28:56.296190 systemd-logind[1122]: Removed session 20. Feb 9 19:28:57.842581 kernel: kauditd_printk_skb: 17 callbacks suppressed Feb 9 19:28:57.842803 kernel: audit: type=1101 audit(1707506937.836:455): pid=5410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:57.836000 audit[5410]: USER_ACCT pid=5410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:57.840668 sshd[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:28:57.843655 sshd[5410]: Accepted publickey for core from 172.24.4.1 port 34880 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:28:57.838000 audit[5410]: CRED_ACQ pid=5410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:57.876353 kernel: audit: type=1103 audit(1707506937.838:456): pid=5410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:57.876545 kernel: audit: type=1006 audit(1707506937.839:457): pid=5410 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Feb 9 19:28:57.839000 audit[5410]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0568a650 a2=3 a3=0 items=0 ppid=1 pid=5410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:57.890558 kernel: audit: type=1300 audit(1707506937.839:457): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0568a650 a2=3 a3=0 items=0 ppid=1 pid=5410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:28:57.890697 kernel: audit: type=1327 audit(1707506937.839:457): proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:57.839000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:28:57.899295 systemd-logind[1122]: New session 21 of user core. Feb 9 19:28:57.900546 systemd[1]: Started session-21.scope. Feb 9 19:28:57.914000 audit[5410]: USER_START pid=5410 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:57.928385 kernel: audit: type=1105 audit(1707506937.914:458): pid=5410 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:57.928000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:57.994633 kernel: audit: type=1103 audit(1707506937.928:459): pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 
addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:28:59.130660 systemd[1]: run-containerd-runc-k8s.io-7dab9bb23639f8dd1be0275750646ed8b5e56b941d0d53153d98176fac0b94f4-runc.qMpIjW.mount: Deactivated successfully. Feb 9 19:29:00.423777 sshd[5410]: pam_unix(sshd:session): session closed for user core Feb 9 19:29:00.424000 audit[5410]: USER_END pid=5410 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:00.432090 systemd[1]: Started sshd@21-172.24.4.217:22-172.24.4.1:34890.service. Feb 9 19:29:00.424000 audit[5410]: CRED_DISP pid=5410 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:00.438870 systemd[1]: sshd@20-172.24.4.217:22-172.24.4.1:34880.service: Deactivated successfully. Feb 9 19:29:00.439745 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:29:00.441621 systemd-logind[1122]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:29:00.443041 systemd-logind[1122]: Removed session 21. 
Feb 9 19:29:00.447434 kernel: audit: type=1106 audit(1707506940.424:460): pid=5410 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:00.447542 kernel: audit: type=1104 audit(1707506940.424:461): pid=5410 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:00.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.217:22-172.24.4.1:34890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:00.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.24.4.217:22-172.24.4.1:34880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:00.461949 kernel: audit: type=1130 audit(1707506940.430:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.217:22-172.24.4.1:34890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:29:01.853000 audit[5443]: USER_ACCT pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:01.855023 sshd[5443]: Accepted publickey for core from 172.24.4.1 port 34890 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:29:01.856000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:01.856000 audit[5443]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0eb6dfc0 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:01.856000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:01.858870 sshd[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:29:01.872227 systemd[1]: Started session-22.scope. Feb 9 19:29:01.872695 systemd-logind[1122]: New session 22 of user core. 
Feb 9 19:29:01.884000 audit[5443]: USER_START pid=5443 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:01.888000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:02.866217 sshd[5443]: pam_unix(sshd:session): session closed for user core Feb 9 19:29:02.867000 audit[5443]: USER_END pid=5443 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:02.871315 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 19:29:02.871399 kernel: audit: type=1106 audit(1707506942.867:469): pid=5443 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:02.873175 systemd-logind[1122]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:29:02.875202 systemd[1]: sshd@21-172.24.4.217:22-172.24.4.1:34890.service: Deactivated successfully. Feb 9 19:29:02.877447 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:29:02.881705 systemd-logind[1122]: Removed session 22. 
Feb 9 19:29:02.868000 audit[5443]: CRED_DISP pid=5443 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:02.892966 kernel: audit: type=1104 audit(1707506942.868:470): pid=5443 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:02.893186 kernel: audit: type=1131 audit(1707506942.875:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.217:22-172.24.4.1:34890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:02.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.24.4.217:22-172.24.4.1:34890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:29:05.437884 kubelet[2109]: I0209 19:29:05.437825 2109 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:29:05.456000 audit[5482]: NETFILTER_CFG table=filter:139 family=2 entries=31 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.456000 audit[5482]: SYSCALL arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffc737f60f0 a2=0 a3=7ffc737f60dc items=0 ppid=2269 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.466999 kernel: audit: type=1325 audit(1707506945.456:472): table=filter:139 family=2 entries=31 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.467111 kernel: audit: type=1300 audit(1707506945.456:472): arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffc737f60f0 a2=0 a3=7ffc737f60dc items=0 ppid=2269 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.468000 audit[5482]: NETFILTER_CFG table=nat:140 family=2 entries=78 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.476087 kernel: audit: type=1327 audit(1707506945.456:472): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.476181 kernel: audit: type=1325 audit(1707506945.468:473): table=nat:140 family=2 entries=78 op=nft_register_rule pid=5482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.468000 audit[5482]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc737f60f0 a2=0 a3=7ffc737f60dc items=0 ppid=2269 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.488649 kernel: audit: type=1300 audit(1707506945.468:473): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffc737f60f0 a2=0 a3=7ffc737f60dc items=0 ppid=2269 pid=5482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.488742 kernel: audit: type=1327 audit(1707506945.468:473): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.542000 audit[5508]: NETFILTER_CFG table=filter:141 family=2 entries=32 op=nft_register_rule pid=5508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.542000 audit[5508]: SYSCALL arch=c000003e syscall=46 success=yes exit=11068 a0=3 a1=7ffd384797f0 a2=0 a3=7ffd384797dc items=0 ppid=2269 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.542000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.546924 kernel: audit: type=1325 audit(1707506945.542:474): table=filter:141 family=2 entries=32 op=nft_register_rule pid=5508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.542000 audit[5508]: NETFILTER_CFG 
table=nat:142 family=2 entries=78 op=nft_register_rule pid=5508 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:05.542000 audit[5508]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd384797f0 a2=0 a3=7ffd384797dc items=0 ppid=2269 pid=5508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:05.542000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:05.617177 kubelet[2109]: I0209 19:29:05.617145 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r86b\" (UniqueName: \"kubernetes.io/projected/0e39f94d-a9f5-478b-8108-72a57c47e37b-kube-api-access-8r86b\") pod \"calico-apiserver-7f7b7cdf76-v4rjq\" (UID: \"0e39f94d-a9f5-478b-8108-72a57c47e37b\") " pod="calico-apiserver/calico-apiserver-7f7b7cdf76-v4rjq" Feb 9 19:29:05.640794 kubelet[2109]: I0209 19:29:05.640769 2109 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e39f94d-a9f5-478b-8108-72a57c47e37b-calico-apiserver-certs\") pod \"calico-apiserver-7f7b7cdf76-v4rjq\" (UID: \"0e39f94d-a9f5-478b-8108-72a57c47e37b\") " pod="calico-apiserver/calico-apiserver-7f7b7cdf76-v4rjq" Feb 9 19:29:06.048205 env[1135]: time="2024-02-09T19:29:06.048099158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7b7cdf76-v4rjq,Uid:0e39f94d-a9f5-478b-8108-72a57c47e37b,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:29:06.667000 audit[5551]: NETFILTER_CFG table=filter:143 family=2 entries=20 op=nft_register_rule pid=5551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:06.667000 audit[5551]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2620 a0=3 a1=7ffce576c8c0 a2=0 a3=7ffce576c8ac items=0 ppid=2269 pid=5551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:06.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:06.672000 audit[5551]: NETFILTER_CFG table=nat:144 family=2 entries=162 op=nft_register_chain pid=5551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:06.672000 audit[5551]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffce576c8c0 a2=0 a3=7ffce576c8ac items=0 ppid=2269 pid=5551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:06.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:06.769209 systemd-networkd[1018]: calic3f84385178: Link UP Feb 9 19:29:06.773490 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:29:06.773570 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic3f84385178: link becomes ready Feb 9 19:29:06.773522 systemd-networkd[1018]: calic3f84385178: Gained carrier Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.622 [INFO][5512] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0 calico-apiserver-7f7b7cdf76- calico-apiserver 0e39f94d-a9f5-478b-8108-72a57c47e37b 1289 0 2024-02-09 19:29:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7b7cdf76 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-2-b-76a749f546.novalocal calico-apiserver-7f7b7cdf76-v4rjq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic3f84385178 [] []}} ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-v4rjq" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.625 [INFO][5512] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-v4rjq" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.711 [INFO][5552] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" HandleID="k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.722 [INFO][5552] ipam_plugin.go 268: Auto assigning IP ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" HandleID="k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-2-b-76a749f546.novalocal", "pod":"calico-apiserver-7f7b7cdf76-v4rjq", "timestamp":"2024-02-09 19:29:06.711317837 +0000 UTC"}, 
Hostname:"ci-3510-3-2-b-76a749f546.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.723 [INFO][5552] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.723 [INFO][5552] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.723 [INFO][5552] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-b-76a749f546.novalocal' Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.724 [INFO][5552] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.729 [INFO][5552] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.735 [INFO][5552] ipam.go 489: Trying affinity for 192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.737 [INFO][5552] ipam.go 155: Attempting to load block cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.741 [INFO][5552] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.2.64/26 host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.741 [INFO][5552] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.2.64/26 handle="k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.742 [INFO][5552] ipam.go 1682: Creating new handle: 
k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.747 [INFO][5552] ipam.go 1203: Writing block in order to claim IPs block=192.168.2.64/26 handle="k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.759 [INFO][5552] ipam.go 1216: Successfully claimed IPs: [192.168.2.69/26] block=192.168.2.64/26 handle="k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.760 [INFO][5552] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.2.69/26] handle="k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" host="ci-3510-3-2-b-76a749f546.novalocal" Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.760 [INFO][5552] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:29:06.792112 env[1135]: 2024-02-09 19:29:06.760 [INFO][5552] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.2.69/26] IPv6=[] ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" HandleID="k8s-pod-network.d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" Feb 9 19:29:06.799425 env[1135]: 2024-02-09 19:29:06.762 [INFO][5512] k8s.go 385: Populated endpoint ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-v4rjq" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0", GenerateName:"calico-apiserver-7f7b7cdf76-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e39f94d-a9f5-478b-8108-72a57c47e37b", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 29, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7b7cdf76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"", Pod:"calico-apiserver-7f7b7cdf76-v4rjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.2.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3f84385178", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:06.799425 env[1135]: 2024-02-09 19:29:06.762 [INFO][5512] k8s.go 386: Calico CNI using IPs: [192.168.2.69/32] ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-v4rjq" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" Feb 9 19:29:06.799425 env[1135]: 2024-02-09 19:29:06.762 [INFO][5512] dataplane_linux.go 68: Setting the host side veth name to calic3f84385178 ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-v4rjq" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" Feb 9 19:29:06.799425 env[1135]: 2024-02-09 19:29:06.774 [INFO][5512] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-v4rjq" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" Feb 9 19:29:06.799425 env[1135]: 2024-02-09 19:29:06.776 [INFO][5512] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-v4rjq" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0", GenerateName:"calico-apiserver-7f7b7cdf76-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e39f94d-a9f5-478b-8108-72a57c47e37b", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 29, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7b7cdf76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b", Pod:"calico-apiserver-7f7b7cdf76-v4rjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic3f84385178", MAC:"32:d5:05:eb:09:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:06.799425 env[1135]: 2024-02-09 19:29:06.785 [INFO][5512] k8s.go 491: Wrote updated endpoint to datastore ContainerID="d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7b7cdf76-v4rjq" WorkloadEndpoint="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--apiserver--7f7b7cdf76--v4rjq-eth0" Feb 9 19:29:06.821000 audit[5580]: NETFILTER_CFG table=filter:145 family=2 entries=59 op=nft_register_chain pid=5580 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:29:06.821000 audit[5580]: SYSCALL arch=c000003e syscall=46 success=yes exit=29292 a0=3 a1=7ffff9d84440 a2=0 a3=7ffff9d8442c items=0 ppid=4364 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:06.821000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:29:06.836843 env[1135]: time="2024-02-09T19:29:06.836770211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:29:06.836843 env[1135]: time="2024-02-09T19:29:06.836810256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:29:06.837067 env[1135]: time="2024-02-09T19:29:06.836824022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:29:06.837330 env[1135]: time="2024-02-09T19:29:06.837267294Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b pid=5589 runtime=io.containerd.runc.v2 Feb 9 19:29:06.865954 systemd[1]: run-containerd-runc-k8s.io-d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b-runc.Fpc3tx.mount: Deactivated successfully. 
Feb 9 19:29:06.920743 env[1135]: time="2024-02-09T19:29:06.919171284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7b7cdf76-v4rjq,Uid:0e39f94d-a9f5-478b-8108-72a57c47e37b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b\"" Feb 9 19:29:06.922148 env[1135]: time="2024-02-09T19:29:06.922120189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:29:07.874101 systemd[1]: Started sshd@22-172.24.4.217:22-172.24.4.1:39078.service. Feb 9 19:29:07.881068 kernel: kauditd_printk_skb: 14 callbacks suppressed Feb 9 19:29:07.881198 kernel: audit: type=1130 audit(1707506947.873:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.217:22-172.24.4.1:39078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:07.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.217:22-172.24.4.1:39078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:08.492853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3512878821.mount: Deactivated successfully. Feb 9 19:29:08.516171 systemd[1]: run-containerd-runc-k8s.io-8066a266be97154c9ecf7b7e04669e5b85203d3a84e76a3138f95ed24eb7f0ad-runc.bNcFyD.mount: Deactivated successfully. 
Feb 9 19:29:08.768101 systemd-networkd[1018]: calic3f84385178: Gained IPv6LL Feb 9 19:29:09.309000 audit[5622]: USER_ACCT pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:09.312106 sshd[5622]: Accepted publickey for core from 172.24.4.1 port 39078 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:29:09.313761 sshd[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:29:09.315096 kernel: audit: type=1101 audit(1707506949.309:480): pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:09.312000 audit[5622]: CRED_ACQ pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:09.319947 kernel: audit: type=1103 audit(1707506949.312:481): pid=5622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:09.323935 kernel: audit: type=1006 audit(1707506949.312:482): pid=5622 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 9 19:29:09.327508 systemd-logind[1122]: New session 23 of user core. Feb 9 19:29:09.328551 systemd[1]: Started session-23.scope. 
Feb 9 19:29:09.312000 audit[5622]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd05e83130 a2=3 a3=0 items=0 ppid=1 pid=5622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:09.337959 kernel: audit: type=1300 audit(1707506949.312:482): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd05e83130 a2=3 a3=0 items=0 ppid=1 pid=5622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:09.312000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:09.351751 kernel: audit: type=1327 audit(1707506949.312:482): proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:09.351000 audit[5622]: USER_START pid=5622 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:09.361696 kernel: audit: type=1105 audit(1707506949.351:483): pid=5622 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:09.361809 kernel: audit: type=1103 audit(1707506949.353:484): pid=5646 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:09.353000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:10.441675 sshd[5622]: pam_unix(sshd:session): session closed for user core Feb 9 19:29:10.446000 audit[5622]: USER_END pid=5622 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:10.462465 kernel: audit: type=1106 audit(1707506950.446:485): pid=5622 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:10.460524 systemd[1]: sshd@22-172.24.4.217:22-172.24.4.1:39078.service: Deactivated successfully. Feb 9 19:29:10.446000 audit[5622]: CRED_DISP pid=5622 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:10.462040 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:29:10.463175 systemd-logind[1122]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:29:10.464252 systemd-logind[1122]: Removed session 23. 
Feb 9 19:29:10.473402 kernel: audit: type=1104 audit(1707506950.446:486): pid=5622 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:10.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.24.4.217:22-172.24.4.1:39078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:11.385700 systemd[1]: run-containerd-runc-k8s.io-8066a266be97154c9ecf7b7e04669e5b85203d3a84e76a3138f95ed24eb7f0ad-runc.yYyexR.mount: Deactivated successfully. Feb 9 19:29:13.303705 env[1135]: time="2024-02-09T19:29:13.303529709Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:13.312328 env[1135]: time="2024-02-09T19:29:13.312245196Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:13.320542 env[1135]: time="2024-02-09T19:29:13.320452744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:13.325109 env[1135]: time="2024-02-09T19:29:13.325041815Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:29:13.329040 env[1135]: time="2024-02-09T19:29:13.327829485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference 
\"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:29:13.336976 env[1135]: time="2024-02-09T19:29:13.336872711Z" level=info msg="CreateContainer within sandbox \"d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:29:13.380262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526978834.mount: Deactivated successfully. Feb 9 19:29:13.389843 env[1135]: time="2024-02-09T19:29:13.389630622Z" level=info msg="CreateContainer within sandbox \"d4d429c2cebb0dd4b0e2e25c042243b1372059edf3842905109a37631ad7a82b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37257eac6414535472a76009d1ced5f84dd88250f256518825d7d6b1be2306f2\"" Feb 9 19:29:13.392331 env[1135]: time="2024-02-09T19:29:13.392258801Z" level=info msg="StartContainer for \"37257eac6414535472a76009d1ced5f84dd88250f256518825d7d6b1be2306f2\"" Feb 9 19:29:13.437021 systemd[1]: run-containerd-runc-k8s.io-37257eac6414535472a76009d1ced5f84dd88250f256518825d7d6b1be2306f2-runc.eHqjt0.mount: Deactivated successfully. 
Feb 9 19:29:13.763778 env[1135]: time="2024-02-09T19:29:13.763711569Z" level=info msg="StartContainer for \"37257eac6414535472a76009d1ced5f84dd88250f256518825d7d6b1be2306f2\" returns successfully" Feb 9 19:29:14.103000 audit[5740]: NETFILTER_CFG table=filter:146 family=2 entries=8 op=nft_register_rule pid=5740 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.106171 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:29:14.106979 kernel: audit: type=1325 audit(1707506954.103:488): table=filter:146 family=2 entries=8 op=nft_register_rule pid=5740 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.103000 audit[5740]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff7d158620 a2=0 a3=7fff7d15860c items=0 ppid=2269 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.114797 kernel: audit: type=1300 audit(1707506954.103:488): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7fff7d158620 a2=0 a3=7fff7d15860c items=0 ppid=2269 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.117931 kernel: audit: type=1327 audit(1707506954.103:488): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.118000 audit[5740]: NETFILTER_CFG table=nat:147 family=2 entries=198 op=nft_register_rule pid=5740 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.118000 audit[5740]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 
a0=3 a1=7fff7d158620 a2=0 a3=7fff7d15860c items=0 ppid=2269 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.128213 kernel: audit: type=1325 audit(1707506954.118:489): table=nat:147 family=2 entries=198 op=nft_register_rule pid=5740 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.128335 kernel: audit: type=1300 audit(1707506954.118:489): arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7fff7d158620 a2=0 a3=7fff7d15860c items=0 ppid=2269 pid=5740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.128367 kernel: audit: type=1327 audit(1707506954.118:489): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.427023 kubelet[2109]: I0209 19:29:14.426868 2109 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f7b7cdf76-v4rjq" podStartSLOduration=-9.223372027430275e+09 pod.CreationTimestamp="2024-02-09 19:29:05 +0000 UTC" firstStartedPulling="2024-02-09 19:29:06.921734554 +0000 UTC m=+162.613144696" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:29:14.416961009 +0000 UTC m=+170.108371261" watchObservedRunningTime="2024-02-09 19:29:14.424501467 +0000 UTC m=+170.115911690" Feb 9 19:29:14.503000 audit[5766]: NETFILTER_CFG table=filter:148 family=2 entries=8 op=nft_register_rule pid=5766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.511948 kernel: audit: 
type=1325 audit(1707506954.503:490): table=filter:148 family=2 entries=8 op=nft_register_rule pid=5766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.503000 audit[5766]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd599795d0 a2=0 a3=7ffd599795bc items=0 ppid=2269 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.531767 kernel: audit: type=1300 audit(1707506954.503:490): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffd599795d0 a2=0 a3=7ffd599795bc items=0 ppid=2269 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:14.531853 kernel: audit: type=1327 audit(1707506954.503:490): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:14.531881 kernel: audit: type=1325 audit(1707506954.507:491): table=nat:149 family=2 entries=198 op=nft_register_rule pid=5766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.507000 audit[5766]: NETFILTER_CFG table=nat:149 family=2 entries=198 op=nft_register_rule pid=5766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:14.507000 audit[5766]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffd599795d0 a2=0 a3=7ffd599795bc items=0 ppid=2269 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:29:14.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:15.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.217:22-172.24.4.1:37042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:15.447436 systemd[1]: Started sshd@23-172.24.4.217:22-172.24.4.1:37042.service. Feb 9 19:29:17.246000 audit[5767]: USER_ACCT pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:17.248006 sshd[5767]: Accepted publickey for core from 172.24.4.1 port 37042 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:29:17.248000 audit[5767]: CRED_ACQ pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:17.248000 audit[5767]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffede96ea0 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:17.248000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:17.250869 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:29:17.259443 systemd-logind[1122]: New session 24 of user core. Feb 9 19:29:17.260992 systemd[1]: Started session-24.scope. 
Feb 9 19:29:17.274000 audit[5767]: USER_START pid=5767 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:17.276000 audit[5772]: CRED_ACQ pid=5772 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:18.228847 sshd[5767]: pam_unix(sshd:session): session closed for user core Feb 9 19:29:18.231000 audit[5767]: USER_END pid=5767 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:18.232000 audit[5767]: CRED_DISP pid=5767 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:18.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.24.4.217:22-172.24.4.1:37042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:18.236437 systemd-logind[1122]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:29:18.236844 systemd[1]: sshd@23-172.24.4.217:22-172.24.4.1:37042.service: Deactivated successfully. Feb 9 19:29:18.238278 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:29:18.240655 systemd-logind[1122]: Removed session 24. Feb 9 19:29:23.235465 systemd[1]: Started sshd@24-172.24.4.217:22-172.24.4.1:37050.service. 
Feb 9 19:29:23.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.217:22-172.24.4.1:37050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:23.240009 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 9 19:29:23.240130 kernel: audit: type=1130 audit(1707506963.235:501): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.217:22-172.24.4.1:37050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:24.734000 audit[5782]: USER_ACCT pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:24.748460 sshd[5782]: Accepted publickey for core from 172.24.4.1 port 37050 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:29:24.749067 kernel: audit: type=1101 audit(1707506964.734:502): pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:24.747000 audit[5782]: CRED_ACQ pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:24.749622 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:29:24.762450 kernel: audit: type=1103 audit(1707506964.747:503): pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:24.762689 kernel: audit: type=1006 audit(1707506964.747:504): pid=5782 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 9 19:29:24.747000 audit[5782]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdb7d5f30 a2=3 a3=0 items=0 ppid=1 pid=5782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:24.789975 kernel: audit: type=1300 audit(1707506964.747:504): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdb7d5f30 a2=3 a3=0 items=0 ppid=1 pid=5782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:24.790127 kernel: audit: type=1327 audit(1707506964.747:504): proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:24.747000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:24.792959 env[1135]: time="2024-02-09T19:29:24.788540815Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:29:24.792212 systemd-logind[1122]: New session 25 of user core. Feb 9 19:29:24.793329 systemd[1]: Started session-25.scope. 
Feb 9 19:29:24.813000 audit[5782]: USER_START pid=5782 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:24.828970 kernel: audit: type=1105 audit(1707506964.813:505): pid=5782 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:24.814000 audit[5797]: CRED_ACQ pid=5797 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:24.839934 kernel: audit: type=1103 audit(1707506964.814:506): pid=5797 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.254 [WARNING][5802] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"93fdcc9f-1773-437d-8e12-9052cf2f26e5", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1", Pod:"coredns-787d4945fb-gcw56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e99fa2c140", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.263 
[INFO][5802] k8s.go 578: Cleaning up netns ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.263 [INFO][5802] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" iface="eth0" netns="" Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.264 [INFO][5802] k8s.go 585: Releasing IP address(es) ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.264 [INFO][5802] utils.go 188: Calico CNI releasing IP address ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.325 [INFO][5814] ipam_plugin.go 415: Releasing address using handleID ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.326 [INFO][5814] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.326 [INFO][5814] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.338 [WARNING][5814] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.338 [INFO][5814] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.340 [INFO][5814] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:25.344034 env[1135]: 2024-02-09 19:29:25.341 [INFO][5802] k8s.go 591: Teardown processing complete. ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:29:25.344672 env[1135]: time="2024-02-09T19:29:25.344638938Z" level=info msg="TearDown network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" successfully" Feb 9 19:29:25.344753 env[1135]: time="2024-02-09T19:29:25.344734168Z" level=info msg="StopPodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" returns successfully" Feb 9 19:29:25.346652 env[1135]: time="2024-02-09T19:29:25.346589415Z" level=info msg="RemovePodSandbox for \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:29:25.346731 env[1135]: time="2024-02-09T19:29:25.346637106Z" level=info msg="Forcibly stopping sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\"" Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.410 [WARNING][5834] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"93fdcc9f-1773-437d-8e12-9052cf2f26e5", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"0345642b477cdaa855cf4c3f2c0cad73f4d6cbdaa2eeb3dc5c189f2878a038d1", Pod:"coredns-787d4945fb-gcw56", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6e99fa2c140", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.411 
[INFO][5834] k8s.go 578: Cleaning up netns ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.411 [INFO][5834] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" iface="eth0" netns="" Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.411 [INFO][5834] k8s.go 585: Releasing IP address(es) ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.411 [INFO][5834] utils.go 188: Calico CNI releasing IP address ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.436 [INFO][5841] ipam_plugin.go 415: Releasing address using handleID ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.436 [INFO][5841] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.436 [INFO][5841] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.468 [WARNING][5841] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.468 [INFO][5841] ipam_plugin.go 443: Releasing address using workloadID ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" HandleID="k8s-pod-network.c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--gcw56-eth0" Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.472 [INFO][5841] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:25.475722 env[1135]: 2024-02-09 19:29:25.474 [INFO][5834] k8s.go 591: Teardown processing complete. ContainerID="c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281" Feb 9 19:29:25.476407 env[1135]: time="2024-02-09T19:29:25.476364600Z" level=info msg="TearDown network for sandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" successfully" Feb 9 19:29:25.559462 env[1135]: time="2024-02-09T19:29:25.559409703Z" level=info msg="RemovePodSandbox \"c45ac6b75f727532ecc096244cd4a91f6cfb814d44e57d40394f1ed62f54e281\" returns successfully" Feb 9 19:29:25.560046 env[1135]: time="2024-02-09T19:29:25.560022469Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:29:25.611790 sshd[5782]: pam_unix(sshd:session): session closed for user core Feb 9 19:29:25.631675 kernel: audit: type=1106 audit(1707506965.613:507): pid=5782 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 
19:29:25.631836 kernel: audit: type=1104 audit(1707506965.613:508): pid=5782 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:25.613000 audit[5782]: USER_END pid=5782 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:25.613000 audit[5782]: CRED_DISP pid=5782 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:25.615885 systemd[1]: sshd@24-172.24.4.217:22-172.24.4.1:37050.service: Deactivated successfully. Feb 9 19:29:25.616833 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:29:25.633163 systemd-logind[1122]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:29:25.634520 systemd-logind[1122]: Removed session 25. Feb 9 19:29:25.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.24.4.217:22-172.24.4.1:37050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.697 [WARNING][5859] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0", GenerateName:"calico-kube-controllers-6f585564b5-", Namespace:"calico-system", SelfLink:"", UID:"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f585564b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43", Pod:"calico-kube-controllers-6f585564b5-stdvx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8bd96fc96a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.697 [INFO][5859] k8s.go 578: Cleaning up netns ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.697 [INFO][5859] dataplane_linux.go 526: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" iface="eth0" netns="" Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.697 [INFO][5859] k8s.go 585: Releasing IP address(es) ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.698 [INFO][5859] utils.go 188: Calico CNI releasing IP address ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.750 [INFO][5868] ipam_plugin.go 415: Releasing address using handleID ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.750 [INFO][5868] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.750 [INFO][5868] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.760 [WARNING][5868] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.760 [INFO][5868] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.761 [INFO][5868] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:25.764493 env[1135]: 2024-02-09 19:29:25.763 [INFO][5859] k8s.go 591: Teardown processing complete. ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:29:25.764493 env[1135]: time="2024-02-09T19:29:25.764471622Z" level=info msg="TearDown network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" successfully" Feb 9 19:29:25.765870 env[1135]: time="2024-02-09T19:29:25.764502772Z" level=info msg="StopPodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" returns successfully" Feb 9 19:29:25.765870 env[1135]: time="2024-02-09T19:29:25.765000871Z" level=info msg="RemovePodSandbox for \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:29:25.765870 env[1135]: time="2024-02-09T19:29:25.765030456Z" level=info msg="Forcibly stopping sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\"" Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.810 [WARNING][5886] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0", GenerateName:"calico-kube-controllers-6f585564b5-", Namespace:"calico-system", SelfLink:"", UID:"d9a58a43-cc9b-49e7-89c7-8d2f444dd31a", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f585564b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"4d46a223ed749a71899b388f2fe587f04a77aa9ea8cc708a3a058342cc5b9a43", Pod:"calico-kube-controllers-6f585564b5-stdvx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8bd96fc96a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.810 [INFO][5886] k8s.go 578: Cleaning up netns ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.810 [INFO][5886] dataplane_linux.go 526: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" iface="eth0" netns="" Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.810 [INFO][5886] k8s.go 585: Releasing IP address(es) ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.810 [INFO][5886] utils.go 188: Calico CNI releasing IP address ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.834 [INFO][5892] ipam_plugin.go 415: Releasing address using handleID ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.834 [INFO][5892] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.834 [INFO][5892] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.852 [WARNING][5892] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.853 [INFO][5892] ipam_plugin.go 443: Releasing address using workloadID ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" HandleID="k8s-pod-network.e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-calico--kube--controllers--6f585564b5--stdvx-eth0" Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.855 [INFO][5892] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:25.858493 env[1135]: 2024-02-09 19:29:25.857 [INFO][5886] k8s.go 591: Teardown processing complete. ContainerID="e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26" Feb 9 19:29:25.859486 env[1135]: time="2024-02-09T19:29:25.858530864Z" level=info msg="TearDown network for sandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" successfully" Feb 9 19:29:25.864246 env[1135]: time="2024-02-09T19:29:25.864162001Z" level=info msg="RemovePodSandbox \"e1d1d0445b6b5b22221d017564629752363b1a2ba6b6ed7e02481a66fabffc26\" returns successfully" Feb 9 19:29:25.865718 env[1135]: time="2024-02-09T19:29:25.865682608Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.913 [WARNING][5910] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"258b5f6f-f507-494e-8282-83a91907d3f5", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3", Pod:"csi-node-driver-x2vsr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.2.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3f586c8501f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.913 [INFO][5910] k8s.go 578: Cleaning up netns ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.913 [INFO][5910] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" iface="eth0" netns="" Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.913 [INFO][5910] k8s.go 585: Releasing IP address(es) ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.913 [INFO][5910] utils.go 188: Calico CNI releasing IP address ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.946 [INFO][5917] ipam_plugin.go 415: Releasing address using handleID ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.947 [INFO][5917] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.947 [INFO][5917] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.955 [WARNING][5917] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.955 [INFO][5917] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.957 [INFO][5917] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:25.964452 env[1135]: 2024-02-09 19:29:25.958 [INFO][5910] k8s.go 591: Teardown processing complete. ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:29:25.964452 env[1135]: time="2024-02-09T19:29:25.963077178Z" level=info msg="TearDown network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" successfully" Feb 9 19:29:25.964452 env[1135]: time="2024-02-09T19:29:25.963106593Z" level=info msg="StopPodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" returns successfully" Feb 9 19:29:25.964452 env[1135]: time="2024-02-09T19:29:25.963548627Z" level=info msg="RemovePodSandbox for \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:29:25.964452 env[1135]: time="2024-02-09T19:29:25.963576590Z" level=info msg="Forcibly stopping sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\"" Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.002 [WARNING][5936] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"258b5f6f-f507-494e-8282-83a91907d3f5", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"5c25219bb93d88f78ee9f3505599e366f70593133d36cbee9c3a8ab6729820a3", Pod:"csi-node-driver-x2vsr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.2.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3f586c8501f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.002 [INFO][5936] k8s.go 578: Cleaning up netns ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.002 [INFO][5936] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" iface="eth0" netns="" Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.002 [INFO][5936] k8s.go 585: Releasing IP address(es) ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.002 [INFO][5936] utils.go 188: Calico CNI releasing IP address ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.025 [INFO][5943] ipam_plugin.go 415: Releasing address using handleID ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.025 [INFO][5943] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.025 [INFO][5943] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.036 [WARNING][5943] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.036 [INFO][5943] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" HandleID="k8s-pod-network.8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-csi--node--driver--x2vsr-eth0" Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.042 [INFO][5943] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:26.045244 env[1135]: 2024-02-09 19:29:26.043 [INFO][5936] k8s.go 591: Teardown processing complete. ContainerID="8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e" Feb 9 19:29:26.045718 env[1135]: time="2024-02-09T19:29:26.045273866Z" level=info msg="TearDown network for sandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" successfully" Feb 9 19:29:26.049145 env[1135]: time="2024-02-09T19:29:26.049114167Z" level=info msg="RemovePodSandbox \"8c667171a6469007017a83c27879f1e44984d2a4f2751ec68f046f3b8876cb8e\" returns successfully" Feb 9 19:29:26.049687 env[1135]: time="2024-02-09T19:29:26.049660788Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.088 [WARNING][5961] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"856ba022-e379-4cd0-87a4-cdfa313ac255", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d", Pod:"coredns-787d4945fb-f8g65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91d2bc7512c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.088 
[INFO][5961] k8s.go 578: Cleaning up netns ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.088 [INFO][5961] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" iface="eth0" netns="" Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.088 [INFO][5961] k8s.go 585: Releasing IP address(es) ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.088 [INFO][5961] utils.go 188: Calico CNI releasing IP address ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.117 [INFO][5967] ipam_plugin.go 415: Releasing address using handleID ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.117 [INFO][5967] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.117 [INFO][5967] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.126 [WARNING][5967] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.126 [INFO][5967] ipam_plugin.go 443: Releasing address using workloadID ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.128 [INFO][5967] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:26.131801 env[1135]: 2024-02-09 19:29:26.130 [INFO][5961] k8s.go 591: Teardown processing complete. ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:29:26.132446 env[1135]: time="2024-02-09T19:29:26.132400897Z" level=info msg="TearDown network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" successfully" Feb 9 19:29:26.132534 env[1135]: time="2024-02-09T19:29:26.132515062Z" level=info msg="StopPodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" returns successfully" Feb 9 19:29:26.133530 env[1135]: time="2024-02-09T19:29:26.133494829Z" level=info msg="RemovePodSandbox for \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:29:26.133612 env[1135]: time="2024-02-09T19:29:26.133539814Z" level=info msg="Forcibly stopping sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\"" Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.171 [WARNING][5988] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"856ba022-e379-4cd0-87a4-cdfa313ac255", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-b-76a749f546.novalocal", ContainerID:"ecdfffdccba80dba34604ec3d325a76a280ecbf1d18b1b390ebfc1dba6cc518d", Pod:"coredns-787d4945fb-f8g65", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91d2bc7512c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.171 
[INFO][5988] k8s.go 578: Cleaning up netns ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.171 [INFO][5988] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" iface="eth0" netns="" Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.171 [INFO][5988] k8s.go 585: Releasing IP address(es) ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.171 [INFO][5988] utils.go 188: Calico CNI releasing IP address ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.196 [INFO][5995] ipam_plugin.go 415: Releasing address using handleID ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.196 [INFO][5995] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.197 [INFO][5995] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.206 [WARNING][5995] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.206 [INFO][5995] ipam_plugin.go 443: Releasing address using workloadID ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" HandleID="k8s-pod-network.72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Workload="ci--3510--3--2--b--76a749f546.novalocal-k8s-coredns--787d4945fb--f8g65-eth0" Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.210 [INFO][5995] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:29:26.214072 env[1135]: 2024-02-09 19:29:26.211 [INFO][5988] k8s.go 591: Teardown processing complete. ContainerID="72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589" Feb 9 19:29:26.214635 env[1135]: time="2024-02-09T19:29:26.214593683Z" level=info msg="TearDown network for sandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" successfully" Feb 9 19:29:26.218324 env[1135]: time="2024-02-09T19:29:26.218294210Z" level=info msg="RemovePodSandbox \"72f95c3e7aca31b0227775ae213603528c94c736af94a0e2a011e5f09f1b4589\" returns successfully" Feb 9 19:29:29.121071 systemd[1]: run-containerd-runc-k8s.io-7dab9bb23639f8dd1be0275750646ed8b5e56b941d0d53153d98176fac0b94f4-runc.0GBju3.mount: Deactivated successfully. Feb 9 19:29:30.618708 systemd[1]: Started sshd@25-172.24.4.217:22-172.24.4.1:42290.service. Feb 9 19:29:30.624536 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:29:30.624639 kernel: audit: type=1130 audit(1707506970.619:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.217:22-172.24.4.1:42290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:29:30.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.217:22-172.24.4.1:42290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:31.934000 audit[6028]: USER_ACCT pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:31.936028 sshd[6028]: Accepted publickey for core from 172.24.4.1 port 42290 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:29:31.946987 kernel: audit: type=1101 audit(1707506971.934:511): pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:31.947181 kernel: audit: type=1103 audit(1707506971.945:512): pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:31.945000 audit[6028]: CRED_ACQ pid=6028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:31.953430 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:29:31.946000 audit[6028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9cb657c0 a2=3 a3=0 items=0 ppid=1 pid=6028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:31.978012 kernel: audit: type=1006 audit(1707506971.946:513): pid=6028 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Feb 9 19:29:31.978170 kernel: audit: type=1300 audit(1707506971.946:513): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9cb657c0 a2=3 a3=0 items=0 ppid=1 pid=6028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:31.978219 kernel: audit: type=1327 audit(1707506971.946:513): proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:31.946000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:31.981107 systemd[1]: Started session-26.scope. Feb 9 19:29:31.986078 systemd-logind[1122]: New session 26 of user core. Feb 9 19:29:31.998000 audit[6028]: USER_START pid=6028 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:32.004921 kernel: audit: type=1105 audit(1707506971.998:514): pid=6028 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:32.005000 audit[6031]: CRED_ACQ pid=6031 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:32.010987 kernel: audit: type=1103 audit(1707506972.005:515): pid=6031 uid=0 auid=500 ses=26 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:32.848068 sshd[6028]: pam_unix(sshd:session): session closed for user core Feb 9 19:29:32.849000 audit[6028]: USER_END pid=6028 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:32.854289 systemd-logind[1122]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:29:32.857190 systemd[1]: sshd@25-172.24.4.217:22-172.24.4.1:42290.service: Deactivated successfully. Feb 9 19:29:32.858883 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:29:32.862966 kernel: audit: type=1106 audit(1707506972.849:516): pid=6028 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:32.863225 systemd-logind[1122]: Removed session 26. Feb 9 19:29:32.849000 audit[6028]: CRED_DISP pid=6028 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:32.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.24.4.217:22-172.24.4.1:42290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:29:32.878966 kernel: audit: type=1104 audit(1707506972.849:517): pid=6028 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:36.094170 systemd[1]: run-containerd-runc-k8s.io-37257eac6414535472a76009d1ced5f84dd88250f256518825d7d6b1be2306f2-runc.eQFLNm.mount: Deactivated successfully. Feb 9 19:29:36.445556 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:29:36.445784 kernel: audit: type=1325 audit(1707506976.435:519): table=filter:150 family=2 entries=7 op=nft_register_rule pid=6088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.435000 audit[6088]: NETFILTER_CFG table=filter:150 family=2 entries=7 op=nft_register_rule pid=6088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.435000 audit[6088]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff73edc8d0 a2=0 a3=7fff73edc8bc items=0 ppid=2269 pid=6088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.464945 kernel: audit: type=1300 audit(1707506976.435:519): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fff73edc8d0 a2=0 a3=7fff73edc8bc items=0 ppid=2269 pid=6088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.474961 kernel: audit: type=1327 audit(1707506976.435:519): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.446000 audit[6088]: NETFILTER_CFG table=nat:151 family=2 entries=205 op=nft_register_chain pid=6088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.483946 kernel: audit: type=1325 audit(1707506976.446:520): table=nat:151 family=2 entries=205 op=nft_register_chain pid=6088 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.485048 kernel: audit: type=1300 audit(1707506976.446:520): arch=c000003e syscall=46 success=yes exit=70436 a0=3 a1=7fff73edc8d0 a2=0 a3=7fff73edc8bc items=0 ppid=2269 pid=6088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.446000 audit[6088]: SYSCALL arch=c000003e syscall=46 success=yes exit=70436 a0=3 a1=7fff73edc8d0 a2=0 a3=7fff73edc8bc items=0 ppid=2269 pid=6088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.504946 kernel: audit: type=1327 audit(1707506976.446:520): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.574000 audit[6114]: NETFILTER_CFG table=filter:152 family=2 entries=6 op=nft_register_rule pid=6114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.574000 audit[6114]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd1821c8d0 a2=0 a3=7ffd1821c8bc items=0 ppid=2269 pid=6114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.583873 kernel: audit: type=1325 audit(1707506976.574:521): table=filter:152 family=2 entries=6 op=nft_register_rule pid=6114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.583944 kernel: audit: type=1300 audit(1707506976.574:521): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd1821c8d0 a2=0 a3=7ffd1821c8bc items=0 ppid=2269 pid=6114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.578000 audit[6114]: NETFILTER_CFG table=nat:153 family=2 entries=212 op=nft_register_chain pid=6114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.589313 kernel: audit: type=1327 audit(1707506976.574:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:36.589363 kernel: audit: type=1325 audit(1707506976.578:522): table=nat:153 family=2 entries=212 op=nft_register_chain pid=6114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:29:36.578000 audit[6114]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffd1821c8d0 a2=0 a3=7ffd1821c8bc items=0 ppid=2269 pid=6114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:36.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:29:37.858049 systemd[1]: Started 
sshd@26-172.24.4.217:22-172.24.4.1:56918.service. Feb 9 19:29:37.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.24.4.217:22-172.24.4.1:56918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:39.244000 audit[6115]: USER_ACCT pid=6115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:39.246182 sshd[6115]: Accepted publickey for core from 172.24.4.1 port 56918 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:29:39.248906 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:29:39.247000 audit[6115]: CRED_ACQ pid=6115 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:39.247000 audit[6115]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc044e4f80 a2=3 a3=0 items=0 ppid=1 pid=6115 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:29:39.247000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:29:39.282762 systemd-logind[1122]: New session 27 of user core. Feb 9 19:29:39.286283 systemd[1]: Started session-27.scope. 
Feb 9 19:29:39.298000 audit[6115]: USER_START pid=6115 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:39.301000 audit[6118]: CRED_ACQ pid=6118 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:40.089195 sshd[6115]: pam_unix(sshd:session): session closed for user core Feb 9 19:29:40.091000 audit[6115]: USER_END pid=6115 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:40.091000 audit[6115]: CRED_DISP pid=6115 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Feb 9 19:29:40.096501 systemd[1]: sshd@26-172.24.4.217:22-172.24.4.1:56918.service: Deactivated successfully. Feb 9 19:29:40.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.24.4.217:22-172.24.4.1:56918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:29:40.099809 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 19:29:40.100393 systemd-logind[1122]: Session 27 logged out. Waiting for processes to exit. Feb 9 19:29:40.104893 systemd-logind[1122]: Removed session 27. 
Feb 9 19:29:41.403945 systemd[1]: run-containerd-runc-k8s.io-8066a266be97154c9ecf7b7e04669e5b85203d3a84e76a3138f95ed24eb7f0ad-runc.XnRxj7.mount: Deactivated successfully.