Mar 17 18:41:12.855705 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:41:12.855724 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:12.855732 kernel: BIOS-provided physical RAM map:
Mar 17 18:41:12.855738 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 18:41:12.855743 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 18:41:12.855748 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 18:41:12.855755 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 17 18:41:12.855760 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 17 18:41:12.855775 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 18:41:12.855781 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 18:41:12.855786 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 18:41:12.855792 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 18:41:12.855797 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 18:41:12.855803 kernel: NX (Execute Disable) protection: active
Mar 17 18:41:12.855811 kernel: SMBIOS 2.8 present.
Mar 17 18:41:12.855817 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 17 18:41:12.855823 kernel: Hypervisor detected: KVM
Mar 17 18:41:12.855829 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:41:12.855834 kernel: kvm-clock: cpu 0, msr 3019a001, primary cpu clock
Mar 17 18:41:12.855840 kernel: kvm-clock: using sched offset of 2381533586 cycles
Mar 17 18:41:12.855847 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:41:12.855853 kernel: tsc: Detected 2794.750 MHz processor
Mar 17 18:41:12.855859 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:41:12.855867 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:41:12.855893 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 17 18:41:12.855899 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:41:12.855905 kernel: Using GB pages for direct mapping
Mar 17 18:41:12.855912 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:41:12.855918 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 17 18:41:12.855924 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:12.855930 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:12.855936 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:12.855943 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 17 18:41:12.855949 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:12.855955 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:12.855961 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:12.855968 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:41:12.855974 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Mar 17 18:41:12.855980 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Mar 17 18:41:12.855986 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 17 18:41:12.855995 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Mar 17 18:41:12.856002 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Mar 17 18:41:12.856008 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Mar 17 18:41:12.856015 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Mar 17 18:41:12.856021 kernel: No NUMA configuration found
Mar 17 18:41:12.856028 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 17 18:41:12.856035 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 17 18:41:12.856042 kernel: Zone ranges:
Mar 17 18:41:12.856048 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:41:12.856055 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 17 18:41:12.856061 kernel: Normal empty
Mar 17 18:41:12.856068 kernel: Movable zone start for each node
Mar 17 18:41:12.856074 kernel: Early memory node ranges
Mar 17 18:41:12.856080 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 18:41:12.856087 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 17 18:41:12.856093 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 17 18:41:12.856101 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:41:12.856108 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 18:41:12.856114 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 18:41:12.856121 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:41:12.856127 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:41:12.856133 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:41:12.856140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:41:12.856146 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:41:12.856153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:41:12.856161 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:41:12.856167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:41:12.856173 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:41:12.856180 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:41:12.856186 kernel: TSC deadline timer available
Mar 17 18:41:12.856193 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 18:41:12.856199 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 18:41:12.856205 kernel: kvm-guest: setup PV sched yield
Mar 17 18:41:12.856212 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 18:41:12.856219 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:41:12.856226 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:41:12.856233 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Mar 17 18:41:12.856239 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Mar 17 18:41:12.856246 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Mar 17 18:41:12.856252 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 18:41:12.856258 kernel: kvm-guest: setup async PF for cpu 0
Mar 17 18:41:12.856265 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Mar 17 18:41:12.856271 kernel: kvm-guest: PV spinlocks enabled
Mar 17 18:41:12.856279 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 18:41:12.856285 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 17 18:41:12.856291 kernel: Policy zone: DMA32
Mar 17 18:41:12.856299 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:12.856306 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:41:12.856312 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:41:12.856319 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:41:12.856326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:41:12.856333 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved)
Mar 17 18:41:12.856340 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:41:12.856346 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:41:12.856353 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:41:12.856359 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:41:12.856366 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:41:12.856373 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:41:12.856380 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:41:12.856386 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:41:12.856394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:41:12.856400 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:41:12.856407 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 18:41:12.856413 kernel: random: crng init done
Mar 17 18:41:12.856419 kernel: Console: colour VGA+ 80x25
Mar 17 18:41:12.856426 kernel: printk: console [ttyS0] enabled
Mar 17 18:41:12.856432 kernel: ACPI: Core revision 20210730
Mar 17 18:41:12.856439 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:41:12.856445 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:41:12.856453 kernel: x2apic enabled
Mar 17 18:41:12.856459 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:41:12.856466 kernel: kvm-guest: setup PV IPIs
Mar 17 18:41:12.856472 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:41:12.856479 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 18:41:12.856485 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Mar 17 18:41:12.856492 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 18:41:12.856498 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 18:41:12.856505 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 18:41:12.856517 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:41:12.856524 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:41:12.856531 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:41:12.856538 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:41:12.856545 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 18:41:12.856552 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 18:41:12.856559 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:41:12.856566 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:41:12.856573 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:41:12.856581 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:41:12.856588 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:41:12.856594 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:41:12.856601 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:41:12.856608 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:41:12.856615 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:41:12.856621 kernel: LSM: Security Framework initializing
Mar 17 18:41:12.856628 kernel: SELinux: Initializing.
Mar 17 18:41:12.856636 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:41:12.856643 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:41:12.856650 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 18:41:12.856657 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 18:41:12.856664 kernel: ... version: 0
Mar 17 18:41:12.856670 kernel: ... bit width: 48
Mar 17 18:41:12.856677 kernel: ... generic registers: 6
Mar 17 18:41:12.856684 kernel: ... value mask: 0000ffffffffffff
Mar 17 18:41:12.856691 kernel: ... max period: 00007fffffffffff
Mar 17 18:41:12.856699 kernel: ... fixed-purpose events: 0
Mar 17 18:41:12.856705 kernel: ... event mask: 000000000000003f
Mar 17 18:41:12.856712 kernel: signal: max sigframe size: 1776
Mar 17 18:41:12.856719 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:41:12.856726 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:41:12.856732 kernel: x86: Booting SMP configuration:
Mar 17 18:41:12.856739 kernel: .... node #0, CPUs: #1
Mar 17 18:41:12.856746 kernel: kvm-clock: cpu 1, msr 3019a041, secondary cpu clock
Mar 17 18:41:12.856752 kernel: kvm-guest: setup async PF for cpu 1
Mar 17 18:41:12.856760 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Mar 17 18:41:12.856775 kernel: #2
Mar 17 18:41:12.856782 kernel: kvm-clock: cpu 2, msr 3019a081, secondary cpu clock
Mar 17 18:41:12.856789 kernel: kvm-guest: setup async PF for cpu 2
Mar 17 18:41:12.856796 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Mar 17 18:41:12.856803 kernel: #3
Mar 17 18:41:12.856809 kernel: kvm-clock: cpu 3, msr 3019a0c1, secondary cpu clock
Mar 17 18:41:12.856816 kernel: kvm-guest: setup async PF for cpu 3
Mar 17 18:41:12.856823 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Mar 17 18:41:12.856831 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:41:12.856837 kernel: smpboot: Max logical packages: 1
Mar 17 18:41:12.856844 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Mar 17 18:41:12.856851 kernel: devtmpfs: initialized
Mar 17 18:41:12.856858 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:41:12.856865 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:41:12.856880 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:41:12.856888 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:41:12.856894 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:41:12.856903 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:41:12.856910 kernel: audit: type=2000 audit(1742236872.490:1): state=initialized audit_enabled=0 res=1
Mar 17 18:41:12.856917 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:41:12.856924 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:41:12.856930 kernel: cpuidle: using governor menu
Mar 17 18:41:12.856937 kernel: ACPI: bus type PCI registered
Mar 17 18:41:12.856944 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:41:12.856951 kernel: dca service started, version 1.12.1
Mar 17 18:41:12.856958 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 18:41:12.856966 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Mar 17 18:41:12.856973 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:41:12.856980 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:41:12.856987 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:41:12.856993 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:41:12.857000 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:41:12.857007 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:41:12.857014 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:41:12.857020 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:41:12.857027 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:41:12.857035 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:41:12.857042 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:41:12.857049 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:41:12.857056 kernel: ACPI: Interpreter enabled
Mar 17 18:41:12.857062 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 18:41:12.857069 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:41:12.857076 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:41:12.857083 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 18:41:12.857090 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:41:12.857202 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:41:12.857277 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 18:41:12.857347 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 18:41:12.857357 kernel: PCI host bridge to bus 0000:00
Mar 17 18:41:12.857432 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:41:12.857496 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:41:12.857560 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:41:12.857623 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 17 18:41:12.857686 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 18:41:12.857749 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 18:41:12.857821 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:41:12.857918 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 18:41:12.857995 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 18:41:12.858068 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 17 18:41:12.858137 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 17 18:41:12.858206 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 17 18:41:12.858273 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:41:12.858355 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:41:12.858424 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 17 18:41:12.858497 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 17 18:41:12.858567 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 17 18:41:12.858643 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:41:12.858712 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 18:41:12.858792 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 17 18:41:12.858863 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 17 18:41:12.858950 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:41:12.859023 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 17 18:41:12.859094 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 17 18:41:12.859161 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 17 18:41:12.859229 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 17 18:41:12.859304 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 18:41:12.859373 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 18:41:12.859447 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 18:41:12.859517 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 17 18:41:12.859585 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 17 18:41:12.859660 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 18:41:12.859727 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 18:41:12.859736 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:41:12.859743 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:41:12.859750 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:41:12.859760 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:41:12.859775 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 18:41:12.859782 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 18:41:12.859789 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 18:41:12.859796 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 18:41:12.859819 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 18:41:12.859833 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 18:41:12.859847 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 18:41:12.859854 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 18:41:12.859863 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 18:41:12.859869 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 18:41:12.859894 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 18:41:12.859901 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 18:41:12.859908 kernel: iommu: Default domain type: Translated
Mar 17 18:41:12.859915 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:41:12.860129 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 18:41:12.860206 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:41:12.860275 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 18:41:12.860288 kernel: vgaarb: loaded
Mar 17 18:41:12.860295 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:41:12.860302 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:41:12.860309 kernel: PTP clock support registered
Mar 17 18:41:12.860316 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:41:12.860322 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:41:12.860329 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 18:41:12.860336 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 17 18:41:12.860343 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:41:12.860351 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:41:12.860357 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:41:12.860364 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:41:12.860372 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:41:12.860378 kernel: pnp: PnP ACPI init
Mar 17 18:41:12.860454 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 18:41:12.860465 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 18:41:12.860472 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:41:12.860481 kernel: NET: Registered PF_INET protocol family
Mar 17 18:41:12.860487 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:41:12.860495 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:41:12.860502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:41:12.860509 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:41:12.860516 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:41:12.860523 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:41:12.860530 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:41:12.860537 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:41:12.860545 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:41:12.860569 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:41:12.860693 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:41:12.860788 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:41:12.860851 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:41:12.860926 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 18:41:12.860989 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:41:12.861051 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 18:41:12.861062 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:41:12.861069 kernel: Initialise system trusted keyrings
Mar 17 18:41:12.861076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:41:12.861083 kernel: Key type asymmetric registered
Mar 17 18:41:12.861090 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:41:12.861097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:41:12.861104 kernel: io scheduler mq-deadline registered
Mar 17 18:41:12.861111 kernel: io scheduler kyber registered
Mar 17 18:41:12.861118 kernel: io scheduler bfq registered
Mar 17 18:41:12.861126 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:41:12.861133 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 18:41:12.861140 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 18:41:12.861147 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 18:41:12.861154 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:41:12.861161 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:41:12.861168 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:41:12.861175 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:41:12.861182 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:41:12.861257 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 18:41:12.861267 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:41:12.861330 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 18:41:12.861392 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T18:41:12 UTC (1742236872)
Mar 17 18:41:12.861471 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 18:41:12.861481 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:41:12.861488 kernel: Segment Routing with IPv6
Mar 17 18:41:12.861495 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:41:12.861504 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:41:12.861511 kernel: Key type dns_resolver registered
Mar 17 18:41:12.861518 kernel: IPI shorthand broadcast: enabled
Mar 17 18:41:12.861525 kernel: sched_clock: Marking stable (451065488, 100838852)->(605175484, -53271144)
Mar 17 18:41:12.861532 kernel: registered taskstats version 1
Mar 17 18:41:12.861539 kernel: Loading compiled-in X.509 certificates
Mar 17 18:41:12.861546 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:41:12.861565 kernel: Key type .fscrypt registered
Mar 17 18:41:12.861573 kernel: Key type fscrypt-provisioning registered
Mar 17 18:41:12.861581 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:41:12.861588 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:41:12.861595 kernel: ima: No architecture policies found
Mar 17 18:41:12.861602 kernel: clk: Disabling unused clocks
Mar 17 18:41:12.861609 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:41:12.861616 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:41:12.861631 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:41:12.861642 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:41:12.861649 kernel: Run /init as init process
Mar 17 18:41:12.861657 kernel: with arguments:
Mar 17 18:41:12.861664 kernel: /init
Mar 17 18:41:12.861671 kernel: with environment:
Mar 17 18:41:12.861677 kernel: HOME=/
Mar 17 18:41:12.861684 kernel: TERM=linux
Mar 17 18:41:12.861701 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:41:12.861711 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:41:12.861720 systemd[1]: Detected virtualization kvm.
Mar 17 18:41:12.861729 systemd[1]: Detected architecture x86-64.
Mar 17 18:41:12.861737 systemd[1]: Running in initrd.
Mar 17 18:41:12.861744 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:41:12.861762 systemd[1]: Hostname set to .
Mar 17 18:41:12.861778 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:41:12.861786 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:41:12.861793 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:41:12.861801 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:41:12.861810 systemd[1]: Reached target paths.target.
Mar 17 18:41:12.861834 systemd[1]: Reached target slices.target.
Mar 17 18:41:12.861843 systemd[1]: Reached target swap.target.
Mar 17 18:41:12.861851 systemd[1]: Reached target timers.target.
Mar 17 18:41:12.861859 systemd[1]: Listening on iscsid.socket.
Mar 17 18:41:12.861868 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:41:12.861892 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:41:12.861904 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:41:12.861912 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:41:12.861920 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:41:12.861927 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:41:12.861935 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:41:12.861943 systemd[1]: Reached target sockets.target.
Mar 17 18:41:12.861961 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:41:12.861971 systemd[1]: Finished network-cleanup.service.
Mar 17 18:41:12.861979 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:41:12.861986 systemd[1]: Starting systemd-journald.service...
Mar 17 18:41:12.861994 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:41:12.862002 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:41:12.862009 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:41:12.862017 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:41:12.862028 systemd-journald[196]: Journal started
Mar 17 18:41:12.862080 systemd-journald[196]: Runtime Journal (/run/log/journal/93ea9d511061449f9cd1ca696623ad1f) is 6.0M, max 48.5M, 42.5M free.
Mar 17 18:41:12.860750 systemd-modules-load[197]: Inserted module 'overlay'
Mar 17 18:41:12.893499 kernel: audit: type=1130 audit(1742236872.888:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.893534 systemd[1]: Started systemd-journald.service.
Mar 17 18:41:12.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.870930 systemd-resolved[198]: Positive Trust Anchors:
Mar 17 18:41:12.897791 kernel: audit: type=1130 audit(1742236872.893:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.870938 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:41:12.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.870965 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:41:12.905154 kernel: audit: type=1130 audit(1742236872.897:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.905170 kernel: audit: type=1130 audit(1742236872.901:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.873041 systemd-resolved[198]: Defaulting to hostname 'linux'.
Mar 17 18:41:12.917053 kernel: audit: type=1130 audit(1742236872.911:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.894527 systemd[1]: Started systemd-resolved.service.
Mar 17 18:41:12.898540 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:41:12.901829 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:41:12.912627 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:41:12.916788 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:41:12.919113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:41:12.926841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:41:12.930573 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:41:12.930593 kernel: audit: type=1130 audit(1742236872.927:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.931233 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:41:12.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.932286 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:41:12.936220 kernel: audit: type=1130 audit(1742236872.930:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:12.939502 dracut-cmdline[214]: dracut-dracut-053
Mar 17 18:41:12.941042 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:41:12.946225 kernel: Bridge firewalling registered
Mar 17 18:41:12.946176 systemd-modules-load[197]: Inserted module 'br_netfilter'
Mar 17 18:41:12.963902 kernel: SCSI subsystem initialized
Mar 17 18:41:12.974917 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:41:12.974945 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:41:12.976200 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:41:12.978925 systemd-modules-load[197]: Inserted module 'dm_multipath' Mar 17 18:41:12.980596 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:41:12.985487 kernel: audit: type=1130 audit(1742236872.980:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:12.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:12.982054 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:41:12.990803 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:41:12.995220 kernel: audit: type=1130 audit(1742236872.990:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:12.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:13.004905 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:41:13.020893 kernel: iscsi: registered transport (tcp) Mar 17 18:41:13.042227 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:41:13.042253 kernel: QLogic iSCSI HBA Driver Mar 17 18:41:13.071115 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:41:13.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:13.072782 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:41:13.117919 kernel: raid6: avx2x4 gen() 30958 MB/s Mar 17 18:41:13.134904 kernel: raid6: avx2x4 xor() 7843 MB/s Mar 17 18:41:13.151898 kernel: raid6: avx2x2 gen() 21498 MB/s Mar 17 18:41:13.168907 kernel: raid6: avx2x2 xor() 14102 MB/s Mar 17 18:41:13.185896 kernel: raid6: avx2x1 gen() 25183 MB/s Mar 17 18:41:13.202903 kernel: raid6: avx2x1 xor() 15254 MB/s Mar 17 18:41:13.219901 kernel: raid6: sse2x4 gen() 14831 MB/s Mar 17 18:41:13.236895 kernel: raid6: sse2x4 xor() 7509 MB/s Mar 17 18:41:13.253892 kernel: raid6: sse2x2 gen() 16448 MB/s Mar 17 18:41:13.270901 kernel: raid6: sse2x2 xor() 9864 MB/s Mar 17 18:41:13.287894 kernel: raid6: sse2x1 gen() 12536 MB/s Mar 17 18:41:13.305264 kernel: raid6: sse2x1 xor() 7817 MB/s Mar 17 18:41:13.305274 kernel: raid6: using algorithm avx2x4 gen() 30958 MB/s Mar 17 18:41:13.305283 kernel: raid6: .... xor() 7843 MB/s, rmw enabled Mar 17 18:41:13.305978 kernel: raid6: using avx2x2 recovery algorithm Mar 17 18:41:13.317897 kernel: xor: automatically using best checksumming function avx Mar 17 18:41:13.405919 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 18:41:13.413406 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:41:13.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:13.414000 audit: BPF prog-id=7 op=LOAD Mar 17 18:41:13.414000 audit: BPF prog-id=8 op=LOAD Mar 17 18:41:13.415460 systemd[1]: Starting systemd-udevd.service... Mar 17 18:41:13.426569 systemd-udevd[401]: Using default interface naming scheme 'v252'. Mar 17 18:41:13.430226 systemd[1]: Started systemd-udevd.service. 
Mar 17 18:41:13.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:13.432110 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:41:13.440347 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Mar 17 18:41:13.462014 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:41:13.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:13.463508 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:41:13.493679 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:41:13.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:13.523356 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 18:41:13.529090 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:41:13.529109 kernel: GPT:9289727 != 19775487 Mar 17 18:41:13.529118 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:41:13.529127 kernel: GPT:9289727 != 19775487 Mar 17 18:41:13.529140 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:41:13.529148 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:41:13.531913 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:41:13.535911 kernel: libata version 3.00 loaded. Mar 17 18:41:13.545297 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 18:41:13.568835 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 18:41:13.568852 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 17 18:41:13.568861 kernel: AES CTR mode by8 optimization enabled Mar 17 18:41:13.568870 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 18:41:13.568984 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 18:41:13.569061 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (440) Mar 17 18:41:13.569070 kernel: scsi host0: ahci Mar 17 18:41:13.569158 kernel: scsi host1: ahci Mar 17 18:41:13.569240 kernel: scsi host2: ahci Mar 17 18:41:13.569320 kernel: scsi host3: ahci Mar 17 18:41:13.569403 kernel: scsi host4: ahci Mar 17 18:41:13.569488 kernel: scsi host5: ahci Mar 17 18:41:13.569567 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 17 18:41:13.569577 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 17 18:41:13.569585 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 17 18:41:13.569594 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 17 18:41:13.569602 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 17 18:41:13.569611 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 17 18:41:13.562918 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:41:13.601210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:41:13.606287 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:41:13.606536 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:41:13.611349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:41:13.614534 systemd[1]: Starting disk-uuid.service... Mar 17 18:41:13.624165 disk-uuid[525]: Primary Header is updated. Mar 17 18:41:13.624165 disk-uuid[525]: Secondary Entries is updated. Mar 17 18:41:13.624165 disk-uuid[525]: Secondary Header is updated. 
Mar 17 18:41:13.627902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:41:13.631897 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:41:13.878251 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 18:41:13.878318 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 17 18:41:13.878328 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 18:41:13.878348 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 18:41:13.878357 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 18:41:13.879906 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 18:41:13.880906 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 17 18:41:13.880919 kernel: ata3.00: applying bridge limits Mar 17 18:41:13.882190 kernel: ata3.00: configured for UDMA/100 Mar 17 18:41:13.882894 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 18:41:13.914911 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 17 18:41:13.931438 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 18:41:13.931456 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 17 18:41:14.631871 disk-uuid[526]: The operation has completed successfully. Mar 17 18:41:14.633203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:41:14.651910 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:41:14.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.651993 systemd[1]: Finished disk-uuid.service. Mar 17 18:41:14.658216 systemd[1]: Starting verity-setup.service... 
Mar 17 18:41:14.670898 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 17 18:41:14.688993 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:41:14.690334 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:41:14.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.691930 systemd[1]: Finished verity-setup.service. Mar 17 18:41:14.747908 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:41:14.748165 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:41:14.748556 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:41:14.749599 systemd[1]: Starting ignition-setup.service... Mar 17 18:41:14.752019 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:41:14.763231 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:41:14.763271 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:41:14.763281 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:41:14.772099 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:41:14.780223 systemd[1]: Finished ignition-setup.service. Mar 17 18:41:14.781755 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:41:14.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.814712 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:41:14.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:14.815000 audit: BPF prog-id=9 op=LOAD Mar 17 18:41:14.816982 systemd[1]: Starting systemd-networkd.service... Mar 17 18:41:14.822738 ignition[650]: Ignition 2.14.0 Mar 17 18:41:14.822748 ignition[650]: Stage: fetch-offline Mar 17 18:41:14.822799 ignition[650]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:14.822809 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:14.822919 ignition[650]: parsed url from cmdline: "" Mar 17 18:41:14.823475 ignition[650]: no config URL provided Mar 17 18:41:14.823481 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:41:14.823489 ignition[650]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:41:14.823506 ignition[650]: op(1): [started] loading QEMU firmware config module Mar 17 18:41:14.823511 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 18:41:14.828956 ignition[650]: op(1): [finished] loading QEMU firmware config module Mar 17 18:41:14.842243 systemd-networkd[718]: lo: Link UP Mar 17 18:41:14.842253 systemd-networkd[718]: lo: Gained carrier Mar 17 18:41:14.842619 systemd-networkd[718]: Enumeration completed Mar 17 18:41:14.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.842692 systemd[1]: Started systemd-networkd.service. Mar 17 18:41:14.842813 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:41:14.844145 systemd-networkd[718]: eth0: Link UP Mar 17 18:41:14.844148 systemd-networkd[718]: eth0: Gained carrier Mar 17 18:41:14.844479 systemd[1]: Reached target network.target. Mar 17 18:41:14.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:14.846462 systemd[1]: Starting iscsiuio.service... Mar 17 18:41:14.850063 systemd[1]: Started iscsiuio.service. Mar 17 18:41:14.851552 systemd[1]: Starting iscsid.service... Mar 17 18:41:14.854576 iscsid[724]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:41:14.854576 iscsid[724]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 17 18:41:14.854576 iscsid[724]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:41:14.854576 iscsid[724]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:41:14.854576 iscsid[724]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:41:14.854576 iscsid[724]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:41:14.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.854900 systemd[1]: Started iscsid.service. Mar 17 18:41:14.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.855832 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:41:14.864529 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:41:14.867539 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:41:14.868883 systemd[1]: Reached target remote-cryptsetup.target. 
Mar 17 18:41:14.869450 systemd[1]: Reached target remote-fs.target. Mar 17 18:41:14.870387 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:41:14.876760 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:41:14.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.902314 ignition[650]: parsing config with SHA512: 4931045414ee05b083aea36e83c8d5e2a55b845f024341032c2b7a7ab3085b6806403bc2a037b39761bae26614a26f6670d5814b2d9c3b5e6b124db720f6daec Mar 17 18:41:14.908695 unknown[650]: fetched base config from "system" Mar 17 18:41:14.908705 unknown[650]: fetched user config from "qemu" Mar 17 18:41:14.909154 ignition[650]: fetch-offline: fetch-offline passed Mar 17 18:41:14.909199 ignition[650]: Ignition finished successfully Mar 17 18:41:14.912806 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:41:14.913938 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:41:14.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.915952 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 18:41:14.918539 systemd[1]: Starting ignition-kargs.service... 
Mar 17 18:41:14.926677 ignition[738]: Ignition 2.14.0 Mar 17 18:41:14.926684 ignition[738]: Stage: kargs Mar 17 18:41:14.926776 ignition[738]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:14.926784 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:14.927691 ignition[738]: kargs: kargs passed Mar 17 18:41:14.927729 ignition[738]: Ignition finished successfully Mar 17 18:41:14.932181 systemd[1]: Finished ignition-kargs.service. Mar 17 18:41:14.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.934469 systemd[1]: Starting ignition-disks.service... Mar 17 18:41:14.942404 ignition[744]: Ignition 2.14.0 Mar 17 18:41:14.942413 ignition[744]: Stage: disks Mar 17 18:41:14.942500 ignition[744]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:14.942509 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:14.946328 ignition[744]: disks: disks passed Mar 17 18:41:14.946374 ignition[744]: Ignition finished successfully Mar 17 18:41:14.948354 systemd[1]: Finished ignition-disks.service. Mar 17 18:41:14.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.948696 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:41:14.950225 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:41:14.950537 systemd[1]: Reached target local-fs.target. Mar 17 18:41:14.953534 systemd[1]: Reached target sysinit.target. Mar 17 18:41:14.954958 systemd[1]: Reached target basic.target. Mar 17 18:41:14.957033 systemd[1]: Starting systemd-fsck-root.service... 
Mar 17 18:41:14.968305 systemd-fsck[752]: ROOT: clean, 623/553520 files, 56022/553472 blocks Mar 17 18:41:14.973278 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:41:14.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:14.975289 systemd[1]: Mounting sysroot.mount... Mar 17 18:41:14.982897 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:41:14.983548 systemd[1]: Mounted sysroot.mount. Mar 17 18:41:14.985023 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:41:14.987449 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:41:14.989091 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:41:14.989125 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:41:14.989142 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:41:14.994475 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:41:14.996550 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:41:15.000551 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:41:15.004643 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:41:15.007170 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:41:15.010822 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:41:15.034763 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:41:15.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:15.036257 systemd[1]: Starting ignition-mount.service... Mar 17 18:41:15.037588 systemd[1]: Starting sysroot-boot.service... Mar 17 18:41:15.040312 bash[803]: umount: /sysroot/usr/share/oem: not mounted. Mar 17 18:41:15.047127 ignition[804]: INFO : Ignition 2.14.0 Mar 17 18:41:15.048065 ignition[804]: INFO : Stage: mount Mar 17 18:41:15.048065 ignition[804]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:15.048065 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:15.050922 ignition[804]: INFO : mount: mount passed Mar 17 18:41:15.051887 ignition[804]: INFO : Ignition finished successfully Mar 17 18:41:15.051967 systemd[1]: Finished ignition-mount.service. Mar 17 18:41:15.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.059356 systemd[1]: Finished sysroot-boot.service. Mar 17 18:41:15.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:15.698944 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:41:15.704896 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813) Mar 17 18:41:15.704974 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:41:15.707584 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:41:15.707606 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:41:15.710688 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:41:15.712987 systemd[1]: Starting ignition-files.service... 
Mar 17 18:41:15.725318 ignition[833]: INFO : Ignition 2.14.0 Mar 17 18:41:15.725318 ignition[833]: INFO : Stage: files Mar 17 18:41:15.727008 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:41:15.727008 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:41:15.729961 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:41:15.731242 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:41:15.731242 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:41:15.734218 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:41:15.735721 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:41:15.735721 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:41:15.735305 unknown[833]: wrote ssh authorized keys file for user: core Mar 17 18:41:15.739678 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:41:15.739678 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:41:15.739678 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 18:41:15.739678 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 18:41:15.778855 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:41:15.863343 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 
18:41:15.863343 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:41:15.867479 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 18:41:16.225359 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 18:41:16.623245 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:41:16.623245 ignition[833]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 17 18:41:16.626572 ignition[833]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 18:41:16.628759 ignition[833]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 18:41:16.628759 ignition[833]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 17 18:41:16.628759 ignition[833]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 17 18:41:16.633187 ignition[833]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:41:16.633187 ignition[833]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:41:16.633187 ignition[833]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 17 18:41:16.633187 ignition[833]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Mar 17 18:41:16.639081 ignition[833]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:41:16.639081 ignition[833]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:41:16.639081 ignition[833]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Mar 17 18:41:16.639081 ignition[833]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:41:16.645291 ignition[833]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:41:16.645291 ignition[833]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:41:16.645291 ignition[833]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:41:16.665462 ignition[833]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:41:16.665462 ignition[833]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:41:16.668559 ignition[833]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:41:16.668559 ignition[833]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:41:16.668559 ignition[833]: INFO : files: files passed
Mar 17 18:41:16.668559 ignition[833]: INFO : Ignition finished successfully
Mar 17 18:41:16.674251 systemd[1]: Finished ignition-files.service.
Mar 17 18:41:16.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.675469 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:41:16.676265 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:41:16.676868 systemd[1]: Starting ignition-quench.service...
Mar 17 18:41:16.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.679831 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:41:16.679930 systemd[1]: Finished ignition-quench.service.
Mar 17 18:41:16.684778 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Mar 17 18:41:16.687368 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:41:16.689219 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:41:16.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.689684 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:41:16.692648 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:41:16.703664 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:41:16.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.703748 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:41:16.704228 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:41:16.705804 systemd[1]: Reached target initrd.target.
Mar 17 18:41:16.707255 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:41:16.708260 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:41:16.720099 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:41:16.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.721626 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:41:16.729570 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:41:16.730208 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:41:16.731483 systemd[1]: Stopped target timers.target.
Mar 17 18:41:16.731797 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:41:16.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.731888 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:41:16.734358 systemd[1]: Stopped target initrd.target.
Mar 17 18:41:16.735923 systemd[1]: Stopped target basic.target.
Mar 17 18:41:16.737276 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:41:16.738588 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:41:16.740204 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:41:16.741578 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:41:16.743146 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:41:16.743458 systemd[1]: Stopped target sysinit.target.
Mar 17 18:41:16.746189 systemd[1]: Stopped target local-fs.target.
Mar 17 18:41:16.747428 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:41:16.748815 systemd[1]: Stopped target swap.target.
Mar 17 18:41:16.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.750225 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:41:16.750304 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:41:16.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.751678 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:41:16.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.752092 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:41:16.752164 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:41:16.754544 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:41:16.754620 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:41:16.756000 systemd[1]: Stopped target paths.target.
Mar 17 18:41:16.757387 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:41:16.760927 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:41:16.761494 systemd[1]: Stopped target slices.target.
Mar 17 18:41:16.763126 systemd[1]: Stopped target sockets.target.
Mar 17 18:41:16.764587 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:41:16.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.764643 systemd[1]: Closed iscsid.socket.
Mar 17 18:41:16.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.766250 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:41:16.766312 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:41:16.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.767356 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:41:16.767433 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:41:16.768643 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:41:16.768726 systemd[1]: Stopped ignition-files.service.
Mar 17 18:41:16.779171 ignition[873]: INFO : Ignition 2.14.0
Mar 17 18:41:16.779171 ignition[873]: INFO : Stage: umount
Mar 17 18:41:16.779171 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:41:16.779171 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:41:16.779171 ignition[873]: INFO : umount: umount passed
Mar 17 18:41:16.779171 ignition[873]: INFO : Ignition finished successfully
Mar 17 18:41:16.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.770811 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:41:16.771507 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:41:16.771598 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:41:16.772478 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:41:16.775627 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:41:16.776579 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:41:16.779224 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:41:16.779990 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:41:16.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.793043 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:41:16.794557 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:41:16.795476 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:41:16.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.797222 systemd[1]: Stopped target network.target.
Mar 17 18:41:16.797972 systemd-networkd[718]: eth0: Gained IPv6LL
Mar 17 18:41:16.798993 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:41:16.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.799027 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:41:16.801431 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:41:16.801465 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:41:16.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.804515 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:41:16.804544 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:41:16.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.806923 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:41:16.808725 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:41:16.810563 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:41:16.811488 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:41:16.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.813904 systemd-networkd[718]: eth0: DHCPv6 lease lost
Mar 17 18:41:16.814885 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:41:16.814959 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:41:16.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.817970 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:41:16.818000 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:41:16.821025 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:41:16.821000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:41:16.822532 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:41:16.822569 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:41:16.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.825127 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:41:16.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.825161 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:41:16.828163 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:41:16.828220 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:41:16.830041 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:41:16.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.831306 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:41:16.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.831729 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:41:16.831803 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:41:16.837635 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:41:16.837000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:41:16.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.837730 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:41:16.839525 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:41:16.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.839621 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:41:16.841027 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:41:16.841055 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:41:16.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.841504 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:41:16.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.841528 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:41:16.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.841832 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:41:16.841859 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:41:16.845989 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:41:16.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.846045 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:41:16.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.847904 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:41:16.847941 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:41:16.850594 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:41:16.852391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:41:16.852436 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:41:16.855440 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:41:16.855510 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:41:16.865265 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:41:16.865350 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:41:16.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.867185 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:41:16.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:16.868702 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:41:16.868744 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:41:16.869840 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:41:16.876270 systemd[1]: Switching root.
Mar 17 18:41:16.877000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 18:41:16.877000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 18:41:16.877000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 18:41:16.877000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 18:41:16.877000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 18:41:16.896989 iscsid[724]: iscsid shutting down.
Mar 17 18:41:16.897894 systemd-journald[196]: Received SIGTERM from PID 1 (systemd).
Mar 17 18:41:16.897934 systemd-journald[196]: Journal stopped
Mar 17 18:41:19.451981 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:41:19.452034 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:41:19.452050 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:41:19.452061 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:41:19.452071 kernel: SELinux: policy capability open_perms=1
Mar 17 18:41:19.452081 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:41:19.452096 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:41:19.452105 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:41:19.452115 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:41:19.452124 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:41:19.452134 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:41:19.452146 systemd[1]: Successfully loaded SELinux policy in 37.498ms.
Mar 17 18:41:19.452163 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.685ms.
Mar 17 18:41:19.452174 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:41:19.452185 systemd[1]: Detected virtualization kvm.
Mar 17 18:41:19.452195 systemd[1]: Detected architecture x86-64.
Mar 17 18:41:19.452205 systemd[1]: Detected first boot.
Mar 17 18:41:19.452216 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:41:19.452226 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:41:19.452236 kernel: kauditd_printk_skb: 71 callbacks suppressed
Mar 17 18:41:19.452249 kernel: audit: type=1400 audit(1742236877.316:82): avc: denied { associate } for pid=927 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:41:19.452261 kernel: audit: type=1300 audit(1742236877.316:82): arch=c000003e syscall=188 success=yes exit=0 a0=c0001876bc a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:19.452272 kernel: audit: type=1327 audit(1742236877.316:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:41:19.452283 kernel: audit: type=1400 audit(1742236877.319:83): avc: denied { associate } for pid=927 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:41:19.452294 kernel: audit: type=1300 audit(1742236877.319:83): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000187795 a2=1ed a3=0 items=2 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:19.452304 kernel: audit: type=1307 audit(1742236877.319:83): cwd="/"
Mar 17 18:41:19.452315 kernel: audit: type=1302 audit(1742236877.319:83): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:19.452325 kernel: audit: type=1302 audit(1742236877.319:83): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:19.452336 kernel: audit: type=1327 audit(1742236877.319:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:41:19.452345 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:41:19.452356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:41:19.452366 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:41:19.452381 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:41:19.452393 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:41:19.452405 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Mar 17 18:41:19.452416 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:41:19.452428 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:41:19.452438 systemd[1]: Created slice system-getty.slice.
Mar 17 18:41:19.452448 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:41:19.452459 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:41:19.452470 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:41:19.452480 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:41:19.452490 systemd[1]: Created slice user.slice.
Mar 17 18:41:19.452502 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:41:19.452512 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:41:19.452522 systemd[1]: Set up automount boot.automount.
Mar 17 18:41:19.452532 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:41:19.452543 systemd[1]: Reached target integritysetup.target.
Mar 17 18:41:19.452553 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:41:19.452564 systemd[1]: Reached target remote-fs.target.
Mar 17 18:41:19.452574 systemd[1]: Reached target slices.target.
Mar 17 18:41:19.452584 systemd[1]: Reached target swap.target.
Mar 17 18:41:19.452594 systemd[1]: Reached target torcx.target.
Mar 17 18:41:19.452613 systemd[1]: Reached target veritysetup.target.
Mar 17 18:41:19.452623 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:41:19.452634 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:41:19.452645 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:41:19.452657 kernel: audit: type=1400 audit(1742236879.368:84): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:41:19.452666 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:41:19.452676 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:41:19.452687 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:41:19.452697 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:41:19.452708 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:41:19.452719 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:41:19.452729 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:41:19.452743 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:41:19.452753 systemd[1]: Mounting media.mount...
Mar 17 18:41:19.452764 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:19.452774 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:41:19.452785 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:41:19.452795 systemd[1]: Mounting tmp.mount...
Mar 17 18:41:19.452806 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:41:19.452816 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:41:19.452826 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:41:19.452837 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:41:19.452847 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:41:19.452858 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:41:19.452868 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:41:19.452889 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:41:19.452899 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:41:19.452911 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:41:19.452922 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 18:41:19.452932 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 18:41:19.452941 systemd[1]: Starting systemd-journald.service...
Mar 17 18:41:19.452952 kernel: loop: module loaded
Mar 17 18:41:19.452962 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:41:19.452972 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:41:19.452983 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:41:19.452992 kernel: fuse: init (API version 7.34)
Mar 17 18:41:19.453004 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:41:19.453015 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:19.453025 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:41:19.453035 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:41:19.453047 systemd-journald[1022]: Journal started
Mar 17 18:41:19.453083 systemd-journald[1022]: Runtime Journal (/run/log/journal/93ea9d511061449f9cd1ca696623ad1f) is 6.0M, max 48.5M, 42.5M free.
Mar 17 18:41:19.368000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:41:19.368000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 17 18:41:19.450000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:41:19.450000 audit[1022]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdc9eced70 a2=4000 a3=7ffdc9ecee0c items=0 ppid=1 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:19.450000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:41:19.454590 systemd[1]: Started systemd-journald.service.
Mar 17 18:41:19.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.456168 systemd[1]: Mounted media.mount.
Mar 17 18:41:19.456964 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:41:19.457885 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:41:19.458770 systemd[1]: Mounted tmp.mount.
Mar 17 18:41:19.460079 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:41:19.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.461144 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:41:19.461378 systemd[1]: Finished modprobe@configfs.service.
Mar 17 18:41:19.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.462720 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:41:19.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.464048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:41:19.464259 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:41:19.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.465370 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:41:19.465576 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:41:19.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.466637 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:41:19.466835 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:41:19.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.467913 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:41:19.468098 systemd[1]: Finished modprobe@fuse.service.
Mar 17 18:41:19.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:19.469142 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:41:19.469541 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:41:19.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Mar 17 18:41:19.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.471071 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:41:19.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.472445 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:41:19.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.473815 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:41:19.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.475086 systemd[1]: Reached target network-pre.target. Mar 17 18:41:19.477003 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:41:19.478592 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:41:19.479355 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:41:19.481621 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:41:19.483285 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:41:19.484118 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:41:19.484954 systemd[1]: Starting systemd-random-seed.service... 
Mar 17 18:41:19.485937 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:41:19.486920 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:41:19.489448 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:41:19.492749 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:41:19.493691 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:41:19.494192 systemd-journald[1022]: Time spent on flushing to /var/log/journal/93ea9d511061449f9cd1ca696623ad1f is 19.225ms for 1033 entries. Mar 17 18:41:19.494192 systemd-journald[1022]: System Journal (/var/log/journal/93ea9d511061449f9cd1ca696623ad1f) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:41:19.536219 systemd-journald[1022]: Received client request to flush runtime journal. Mar 17 18:41:19.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.498912 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:41:19.536591 udevadm[1061]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:41:19.501206 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:41:19.504154 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:41:19.507360 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:41:19.511196 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:41:19.516337 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:41:19.518413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:41:19.537052 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:41:19.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.543533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:41:19.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.915465 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:41:19.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.918295 systemd[1]: Starting systemd-udevd.service... Mar 17 18:41:19.934705 systemd-udevd[1071]: Using default interface naming scheme 'v252'. Mar 17 18:41:19.946567 systemd[1]: Started systemd-udevd.service. Mar 17 18:41:19.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:19.949993 systemd[1]: Starting systemd-networkd.service... Mar 17 18:41:19.957638 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:41:19.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:19.998473 systemd[1]: Started systemd-userdbd.service. Mar 17 18:41:20.009662 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 18:41:20.006172 systemd[1]: Found device dev-ttyS0.device. Mar 17 18:41:20.013917 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:41:20.015912 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:41:20.053749 systemd-networkd[1085]: lo: Link UP Mar 17 18:41:20.054147 systemd-networkd[1085]: lo: Gained carrier Mar 17 18:41:20.054685 systemd-networkd[1085]: Enumeration completed Mar 17 18:41:20.054894 systemd[1]: Started systemd-networkd.service. Mar 17 18:41:20.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:20.056078 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 18:41:20.057357 systemd-networkd[1085]: eth0: Link UP Mar 17 18:41:20.057446 systemd-networkd[1085]: eth0: Gained carrier Mar 17 18:41:20.032000 audit[1087]: AVC avc: denied { confidentiality } for pid=1087 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:41:20.032000 audit[1087]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5577065391b0 a1=338ac a2=7f38d03a0bc5 a3=5 items=110 ppid=1071 pid=1087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:20.032000 audit: CWD cwd="/" Mar 17 18:41:20.032000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=1 name=(null) inode=13785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=2 name=(null) inode=13785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=3 name=(null) inode=13786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=4 name=(null) inode=13785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=5 name=(null) inode=13787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=6 name=(null) inode=13785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=7 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=8 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=9 name=(null) inode=13789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=10 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=11 name=(null) inode=13790 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=12 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=13 name=(null) inode=13791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=14 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=15 name=(null) inode=13792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=16 name=(null) inode=13788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=17 name=(null) inode=13793 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=18 name=(null) inode=13785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=19 name=(null) inode=13794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=20 name=(null) inode=13794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=21 name=(null) inode=13795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=22 name=(null) inode=13794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=23 name=(null) inode=13796 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:41:20.032000 audit: PATH item=24 name=(null) inode=13794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=25 name=(null) inode=13797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=26 name=(null) inode=13794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=27 name=(null) inode=13798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=28 name=(null) inode=13794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=29 name=(null) inode=13799 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=30 name=(null) inode=13785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=31 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=32 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=33 
name=(null) inode=13801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=34 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=35 name=(null) inode=13802 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=36 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=37 name=(null) inode=13803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=38 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=39 name=(null) inode=13804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=40 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=41 name=(null) inode=13805 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=42 name=(null) inode=13785 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=43 name=(null) inode=13806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=44 name=(null) inode=13806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=45 name=(null) inode=13807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=46 name=(null) inode=13806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=47 name=(null) inode=13808 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=48 name=(null) inode=13806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=49 name=(null) inode=13809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=50 name=(null) inode=13806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=51 name=(null) inode=13810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=52 name=(null) inode=13806 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=53 name=(null) inode=13811 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=55 name=(null) inode=13812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=56 name=(null) inode=13812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=57 name=(null) inode=13813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=58 name=(null) inode=13812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=59 name=(null) inode=13814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=60 name=(null) inode=13812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=61 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=62 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=63 name=(null) inode=13816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=64 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=65 name=(null) inode=13817 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=66 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=67 name=(null) inode=13818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=68 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=69 name=(null) inode=13819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=70 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=71 name=(null) inode=13820 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=72 name=(null) inode=13812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=73 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=74 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=75 name=(null) inode=13822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=76 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=77 name=(null) inode=13823 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:41:20.032000 audit: PATH item=78 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:41:20.032000 audit: PATH item=79 name=(null) inode=13824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=80 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=81 name=(null) inode=13825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=82 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=83 name=(null) inode=13826 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=84 name=(null) inode=13812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=85 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=86 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=87 name=(null) inode=13828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=88 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=89 name=(null) inode=13829 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=90 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=91 name=(null) inode=13830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=92 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=93 name=(null) inode=13831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=94 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=95 name=(null) inode=13832 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=96 name=(null) inode=13812 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=97 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=98 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=99 name=(null) inode=13834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=100 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=101 name=(null) inode=13835 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=102 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=103 name=(null) inode=13836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=104 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=105 name=(null) inode=13837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=106 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=107 name=(null) inode=13838 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PATH item=109 name=(null) inode=13839 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:41:20.032000 audit: PROCTITLE proctitle="(udev-worker)"
Mar 17 18:41:20.070027 systemd-networkd[1085]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 18:41:20.072914 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 18:41:20.073890 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 18:41:20.076634 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 18:41:20.076650 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 18:41:20.076757 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 18:41:20.134012 kernel: kvm: Nested Virtualization enabled
Mar 17 18:41:20.134104 kernel: SVM: kvm: Nested Paging enabled
Mar 17 18:41:20.134119 kernel: SVM: Virtual VMLOAD VMSAVE supported
Mar 17 18:41:20.135573 kernel: SVM: Virtual GIF supported
Mar 17 18:41:20.150899 kernel: EDAC MC: Ver: 3.0.0
Mar 17 18:41:20.177300 systemd[1]: Finished systemd-udev-settle.service.
Mar 17 18:41:20.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.179506 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 18:41:20.186307 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:41:20.211645 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:41:20.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.212627 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:41:20.214379 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:41:20.217368 lvm[1109]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:41:20.247746 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:41:20.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.248706 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:41:20.249525 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:41:20.249551 systemd[1]: Reached target local-fs.target.
Mar 17 18:41:20.250321 systemd[1]: Reached target machines.target.
Mar 17 18:41:20.252137 systemd[1]: Starting ldconfig.service...
Mar 17 18:41:20.253031 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.253074 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:41:20.253797 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:41:20.255350 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:41:20.257232 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:41:20.259434 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:41:20.260884 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1112 (bootctl)
Mar 17 18:41:20.261835 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:41:20.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.265023 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:41:20.274123 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:41:20.278056 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:41:20.278259 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:41:20.287899 kernel: loop0: detected capacity change from 0 to 210664
Mar 17 18:41:20.295398 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:41:20.295398 systemd-fsck[1120]: /dev/vda1: 789 files, 119299/258078 clusters
Mar 17 18:41:20.297360 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:41:20.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.299949 systemd[1]: Mounting boot.mount...
Mar 17 18:41:20.315577 systemd[1]: Mounted boot.mount.
Mar 17 18:41:20.527913 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:41:20.530569 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:41:20.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.539476 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:41:20.540093 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:41:20.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.545908 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 18:41:20.549176 (sd-sysext)[1133]: Using extensions 'kubernetes'.
Mar 17 18:41:20.549456 (sd-sysext)[1133]: Merged extensions into '/usr'.
Mar 17 18:41:20.563727 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:20.565116 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:41:20.566135 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.567122 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:41:20.568918 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:41:20.570500 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:41:20.571818 ldconfig[1111]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:41:20.571306 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.571422 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:41:20.571517 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:20.574078 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:41:20.575317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:41:20.575456 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:41:20.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.576629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:41:20.576752 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:41:20.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.578051 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:41:20.578184 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:41:20.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.579594 systemd[1]: Finished ldconfig.service.
Mar 17 18:41:20.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.580723 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:41:20.580813 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.582031 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:41:20.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.583917 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:41:20.585505 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:41:20.589787 systemd[1]: Reloading.
Mar 17 18:41:20.593995 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:41:20.594624 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:41:20.595935 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:41:20.636158 /usr/lib/systemd/system-generators/torcx-generator[1170]: time="2025-03-17T18:41:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:41:20.636554 /usr/lib/systemd/system-generators/torcx-generator[1170]: time="2025-03-17T18:41:20Z" level=info msg="torcx already run"
Mar 17 18:41:20.704380 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:41:20.704398 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:41:20.721003 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:41:20.774940 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:41:20.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.778508 systemd[1]: Starting audit-rules.service...
Mar 17 18:41:20.780280 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:41:20.782244 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:41:20.784419 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:41:20.786419 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:41:20.788116 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:41:20.789747 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:41:20.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.791000 audit[1229]: SYSTEM_BOOT pid=1229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:20.795250 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:41:20.798172 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:20.798454 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.801349 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:41:20.803674 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:41:20.805888 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:41:20.806999 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.807230 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:41:20.807338 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:41:20.807420 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:20.808587 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:41:20.810134 augenrules[1242]: No rules
Mar 17 18:41:20.809000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:41:20.809000 audit[1242]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeebdeea00 a2=420 a3=0 items=0 ppid=1217 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:20.809000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:41:20.810787 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:41:20.812366 systemd[1]: Finished audit-rules.service.
Mar 17 18:41:20.813742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:41:20.814047 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:41:20.815527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:41:20.815685 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:41:20.817156 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:41:20.817282 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:41:20.819347 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:41:20.819607 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.821040 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:41:20.823380 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:20.823811 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.824788 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:41:20.826662 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:41:20.828542 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:41:20.829502 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.829607 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:41:20.829688 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:41:20.829750 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:20.830564 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:41:20.831977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:41:20.832213 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:41:20.833512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:41:20.833740 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:41:20.835017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:41:20.835258 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:41:20.836452 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:41:20.836703 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.839189 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:20.839401 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.840581 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:41:20.842439 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:41:20.844114 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:41:20.845922 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:41:20.846848 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.846976 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:41:20.848147 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:41:20.849463 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:41:20.849563 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:41:20.850496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:41:20.850624 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:41:20.852132 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:41:20.852240 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:41:20.853557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:41:20.853725 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:41:20.855274 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:41:20.855542 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:41:20.856890 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:41:20.856968 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:41:20.858166 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:41:20.865623 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:41:20.866500 systemd-resolved[1224]: Positive Trust Anchors:
Mar 17 18:41:20.866510 systemd-resolved[1224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:41:20.866535 systemd-resolved[1224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:41:21.352719 systemd-timesyncd[1225]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 17 18:41:21.352754 systemd-timesyncd[1225]: Initial clock synchronization to Mon 2025-03-17 18:41:21.352659 UTC.
Mar 17 18:41:21.353419 systemd[1]: Reached target time-set.target.
Mar 17 18:41:21.359871 systemd-resolved[1224]: Defaulting to hostname 'linux'.
Mar 17 18:41:21.361261 systemd[1]: Started systemd-resolved.service.
Mar 17 18:41:21.362155 systemd[1]: Reached target network.target.
Mar 17 18:41:21.362962 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:41:21.363827 systemd[1]: Reached target sysinit.target.
Mar 17 18:41:21.364707 systemd[1]: Started motdgen.path.
Mar 17 18:41:21.365446 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:41:21.366676 systemd[1]: Started logrotate.timer.
Mar 17 18:41:21.367508 systemd[1]: Started mdadm.timer.
Mar 17 18:41:21.368222 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:41:21.369104 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:41:21.369130 systemd[1]: Reached target paths.target.
Mar 17 18:41:21.369906 systemd[1]: Reached target timers.target.
Mar 17 18:41:21.370984 systemd[1]: Listening on dbus.socket.
Mar 17 18:41:21.372711 systemd[1]: Starting docker.socket...
Mar 17 18:41:21.374827 systemd[1]: Listening on sshd.socket.
Mar 17 18:41:21.375684 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:41:21.375927 systemd[1]: Listening on docker.socket.
Mar 17 18:41:21.376718 systemd[1]: Reached target sockets.target.
Mar 17 18:41:21.377524 systemd[1]: Reached target basic.target.
Mar 17 18:41:21.378408 systemd[1]: System is tainted: cgroupsv1
Mar 17 18:41:21.378445 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:41:21.378462 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:41:21.379280 systemd[1]: Starting containerd.service...
Mar 17 18:41:21.380886 systemd[1]: Starting dbus.service...
Mar 17 18:41:21.382399 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:41:21.384023 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:41:21.385024 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:41:21.386723 jq[1279]: false
Mar 17 18:41:21.385962 systemd[1]: Starting motdgen.service...
Mar 17 18:41:21.387944 systemd[1]: Starting prepare-helm.service...
Mar 17 18:41:21.389808 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:41:21.391787 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:41:21.395289 systemd[1]: Starting systemd-logind.service...
Mar 17 18:41:21.397230 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:41:21.397275 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:41:21.400263 systemd[1]: Starting update-engine.service...
Mar 17 18:41:21.402477 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:41:21.407830 dbus-daemon[1278]: [system] SELinux support is enabled
Mar 17 18:41:21.405029 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:41:21.412765 jq[1302]: true
Mar 17 18:41:21.405233 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:41:21.405462 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:41:21.405645 systemd[1]: Finished motdgen.service.
Mar 17 18:41:21.407433 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:41:21.407647 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:41:21.408746 systemd[1]: Started dbus.service.
Mar 17 18:41:21.414101 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:41:21.414124 systemd[1]: Reached target system-config.target.
Mar 17 18:41:21.414841 tar[1306]: linux-amd64/helm
Mar 17 18:41:21.415093 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:41:21.415108 systemd[1]: Reached target user-config.target.
Mar 17 18:41:21.417810 jq[1308]: true
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found loop1
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found sr0
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found vda
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found vda1
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found vda2
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found vda3
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found usr
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found vda4
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found vda6
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found vda7
Mar 17 18:41:21.420651 extend-filesystems[1280]: Found vda9
Mar 17 18:41:21.420651 extend-filesystems[1280]: Checking size of /dev/vda9
Mar 17 18:41:21.441902 extend-filesystems[1280]: Resized partition /dev/vda9
Mar 17 18:41:21.446086 extend-filesystems[1337]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:41:21.455893 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 18:41:21.458741 env[1309]: time="2025-03-17T18:41:21.458685837Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:41:21.468823 update_engine[1300]: I0317 18:41:21.468704 1300 main.cc:92] Flatcar Update Engine starting
Mar 17 18:41:21.476755 update_engine[1300]: I0317 18:41:21.470430 1300 update_check_scheduler.cc:74] Next update check in 11m36s
Mar 17 18:41:21.470397 systemd[1]: Started update-engine.service.
Mar 17 18:41:21.472647 systemd[1]: Started locksmithd.service.
Mar 17 18:41:21.476516 systemd-logind[1293]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 18:41:21.476531 systemd-logind[1293]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:41:21.477649 systemd-logind[1293]: New seat seat0.
Mar 17 18:41:21.482530 bash[1334]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:41:21.482306 systemd[1]: Started systemd-logind.service.
Mar 17 18:41:21.483983 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:41:21.487700 env[1309]: time="2025-03-17T18:41:21.487665704Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:41:21.488196 env[1309]: time="2025-03-17T18:41:21.488179057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.489744943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.489788845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.490046538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.490061446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.490073278Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.490082956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.490143881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.490329018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.490459703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:41:21.491870 env[1309]: time="2025-03-17T18:41:21.490473739Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:41:21.493664 env[1309]: time="2025-03-17T18:41:21.490513834Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:41:21.493664 env[1309]: time="2025-03-17T18:41:21.490523082Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:41:21.494920 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 18:41:21.531742 locksmithd[1339]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:41:21.543851 extend-filesystems[1337]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 18:41:21.543851 extend-filesystems[1337]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 18:41:21.543851 extend-filesystems[1337]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 18:41:21.548975 extend-filesystems[1280]: Resized filesystem in /dev/vda9
Mar 17 18:41:21.544256 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:41:21.544484 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.550940895Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.550976862Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.550988274Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551022488Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551038127Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551050651Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551061902Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551074135Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551085496Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551098040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551110693Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551120592Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551219598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:41:21.553868 env[1309]: time="2025-03-17T18:41:21.551280342Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:41:21.552933 systemd[1]: Started containerd.service. Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551563442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551583891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551594430Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551633714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551645847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551672787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551682586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551693135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551704437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551714496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551724635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551735725Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551827257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551840792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554319 env[1309]: time="2025-03-17T18:41:21.551851282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554599 env[1309]: time="2025-03-17T18:41:21.551878904Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:41:21.554599 env[1309]: time="2025-03-17T18:41:21.551891908Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:41:21.554599 env[1309]: time="2025-03-17T18:41:21.551902047Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 17 18:41:21.554599 env[1309]: time="2025-03-17T18:41:21.551918087Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:41:21.554599 env[1309]: time="2025-03-17T18:41:21.551952542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:41:21.554695 env[1309]: time="2025-03-17T18:41:21.552115708Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false 
MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:41:21.554695 env[1309]: time="2025-03-17T18:41:21.552159961Z" level=info msg="Connect containerd service" Mar 17 18:41:21.554695 env[1309]: time="2025-03-17T18:41:21.552188274Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:41:21.554695 env[1309]: time="2025-03-17T18:41:21.552588895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:41:21.554695 env[1309]: time="2025-03-17T18:41:21.552761809Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:41:21.554695 env[1309]: time="2025-03-17T18:41:21.552790152Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 17 18:41:21.554695 env[1309]: time="2025-03-17T18:41:21.552824176Z" level=info msg="containerd successfully booted in 0.098767s" Mar 17 18:41:21.558479 env[1309]: time="2025-03-17T18:41:21.554808126Z" level=info msg="Start subscribing containerd event" Mar 17 18:41:21.558479 env[1309]: time="2025-03-17T18:41:21.554865223Z" level=info msg="Start recovering state" Mar 17 18:41:21.558479 env[1309]: time="2025-03-17T18:41:21.554911109Z" level=info msg="Start event monitor" Mar 17 18:41:21.558479 env[1309]: time="2025-03-17T18:41:21.554920397Z" level=info msg="Start snapshots syncer" Mar 17 18:41:21.558479 env[1309]: time="2025-03-17T18:41:21.554927790Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:41:21.558479 env[1309]: time="2025-03-17T18:41:21.554933541Z" level=info msg="Start streaming server" Mar 17 18:41:21.571142 systemd-networkd[1085]: eth0: Gained IPv6LL Mar 17 18:41:21.572687 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:41:21.574021 systemd[1]: Reached target network-online.target. Mar 17 18:41:21.576391 systemd[1]: Starting kubelet.service... Mar 17 18:41:21.739944 sshd_keygen[1299]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:41:21.756723 systemd[1]: Finished sshd-keygen.service. Mar 17 18:41:21.758945 systemd[1]: Starting issuegen.service... Mar 17 18:41:21.764237 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:41:21.764408 systemd[1]: Finished issuegen.service. Mar 17 18:41:21.766342 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:41:21.771735 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:41:21.773829 systemd[1]: Started getty@tty1.service. Mar 17 18:41:21.775548 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:41:21.777490 systemd[1]: Reached target getty.target. 
Mar 17 18:41:21.860615 tar[1306]: linux-amd64/LICENSE Mar 17 18:41:21.860615 tar[1306]: linux-amd64/README.md Mar 17 18:41:21.864573 systemd[1]: Finished prepare-helm.service. Mar 17 18:41:22.119129 systemd[1]: Started kubelet.service. Mar 17 18:41:22.120317 systemd[1]: Reached target multi-user.target. Mar 17 18:41:22.122319 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:41:22.128249 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:41:22.128458 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:41:22.130191 systemd[1]: Startup finished in 4.884s (kernel) + 4.703s (userspace) = 9.588s. Mar 17 18:41:22.539306 kubelet[1379]: E0317 18:41:22.539200 1379 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:22.541190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:22.541319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:30.999806 systemd[1]: Created slice system-sshd.slice. Mar 17 18:41:31.000885 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:41764.service. Mar 17 18:41:31.032736 sshd[1390]: Accepted publickey for core from 10.0.0.1 port 41764 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:41:31.033899 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:31.041983 systemd-logind[1293]: New session 1 of user core. Mar 17 18:41:31.042812 systemd[1]: Created slice user-500.slice. Mar 17 18:41:31.043669 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:41:31.050774 systemd[1]: Finished user-runtime-dir@500.service. 
Mar 17 18:41:31.051821 systemd[1]: Starting user@500.service... Mar 17 18:41:31.054184 (systemd)[1394]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:31.114742 systemd[1394]: Queued start job for default target default.target. Mar 17 18:41:31.114969 systemd[1394]: Reached target paths.target. Mar 17 18:41:31.114988 systemd[1394]: Reached target sockets.target. Mar 17 18:41:31.115003 systemd[1394]: Reached target timers.target. Mar 17 18:41:31.115016 systemd[1394]: Reached target basic.target. Mar 17 18:41:31.115062 systemd[1394]: Reached target default.target. Mar 17 18:41:31.115089 systemd[1394]: Startup finished in 56ms. Mar 17 18:41:31.115145 systemd[1]: Started user@500.service. Mar 17 18:41:31.116023 systemd[1]: Started session-1.scope. Mar 17 18:41:31.164578 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:41774.service. Mar 17 18:41:31.193817 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 41774 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:41:31.195261 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:31.199548 systemd-logind[1293]: New session 2 of user core. Mar 17 18:41:31.200252 systemd[1]: Started session-2.scope. Mar 17 18:41:31.254864 sshd[1404]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:31.257189 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:41784.service. Mar 17 18:41:31.257584 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:41774.service: Deactivated successfully. Mar 17 18:41:31.258492 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:41:31.258545 systemd-logind[1293]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:41:31.259416 systemd-logind[1293]: Removed session 2. 
Mar 17 18:41:31.287362 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 41784 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:41:31.288524 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:31.291805 systemd-logind[1293]: New session 3 of user core. Mar 17 18:41:31.292570 systemd[1]: Started session-3.scope. Mar 17 18:41:31.341924 sshd[1409]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:31.344987 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:41800.service. Mar 17 18:41:31.345616 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:41784.service: Deactivated successfully. Mar 17 18:41:31.347051 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:41:31.347524 systemd-logind[1293]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:41:31.348510 systemd-logind[1293]: Removed session 3. Mar 17 18:41:31.375028 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 41800 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:41:31.376221 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:31.379543 systemd-logind[1293]: New session 4 of user core. Mar 17 18:41:31.380319 systemd[1]: Started session-4.scope. Mar 17 18:41:31.433776 sshd[1417]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:31.436119 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:41810.service. Mar 17 18:41:31.436771 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:41800.service: Deactivated successfully. Mar 17 18:41:31.437834 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:41:31.438192 systemd-logind[1293]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:41:31.439252 systemd-logind[1293]: Removed session 4. 
Mar 17 18:41:31.466108 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 41810 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:41:31.467146 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:31.470405 systemd-logind[1293]: New session 5 of user core. Mar 17 18:41:31.471230 systemd[1]: Started session-5.scope. Mar 17 18:41:31.525013 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 18:41:31.525211 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:41:31.532055 dbus-daemon[1278]: \xd0\u000d\xbbJ\u000dV: received setenforce notice (enforcing=-1101720192) Mar 17 18:41:31.534096 sudo[1429]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:31.535490 sshd[1423]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:31.537819 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:41822.service. Mar 17 18:41:31.538821 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:41810.service: Deactivated successfully. Mar 17 18:41:31.539923 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:41:31.540092 systemd-logind[1293]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:41:31.540957 systemd-logind[1293]: Removed session 5. Mar 17 18:41:31.567192 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 41822 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:41:31.568162 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:31.571417 systemd-logind[1293]: New session 6 of user core. Mar 17 18:41:31.572174 systemd[1]: Started session-6.scope. 
Mar 17 18:41:31.624179 sudo[1438]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 18:41:31.624381 sudo[1438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:41:31.626891 sudo[1438]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:31.630941 sudo[1437]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 17 18:41:31.631145 sudo[1437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:41:31.639341 systemd[1]: Stopping audit-rules.service... Mar 17 18:41:31.639000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Mar 17 18:41:31.640820 auditctl[1441]: No rules Mar 17 18:41:31.641157 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:41:31.641413 systemd[1]: Stopped audit-rules.service. Mar 17 18:41:31.641878 kernel: kauditd_printk_skb: 167 callbacks suppressed Mar 17 18:41:31.641920 kernel: audit: type=1305 audit(1742236891.639:135): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Mar 17 18:41:31.639000 audit[1441]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff6fdb89c0 a2=420 a3=0 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:31.642944 systemd[1]: Starting audit-rules.service... 
Mar 17 18:41:31.648849 kernel: audit: type=1300 audit(1742236891.639:135): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff6fdb89c0 a2=420 a3=0 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:31.648911 kernel: audit: type=1327 audit(1742236891.639:135): proctitle=2F7362696E2F617564697463746C002D44 Mar 17 18:41:31.639000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Mar 17 18:41:31.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.653823 kernel: audit: type=1131 audit(1742236891.639:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.659962 augenrules[1459]: No rules Mar 17 18:41:31.660821 systemd[1]: Finished audit-rules.service. Mar 17 18:41:31.661927 sudo[1437]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:31.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.663148 sshd[1431]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:31.666272 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:41824.service. Mar 17 18:41:31.670544 kernel: audit: type=1130 audit(1742236891.660:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:31.670595 kernel: audit: type=1106 audit(1742236891.661:138): pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.661000 audit[1437]: USER_END pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.666643 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:41822.service: Deactivated successfully. Mar 17 18:41:31.667698 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:41:31.668321 systemd-logind[1293]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:41:31.669357 systemd-logind[1293]: Removed session 6. Mar 17 18:41:31.661000 audit[1437]: CRED_DISP pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.661000 audit[1431]: USER_END pid=1431 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:31.678494 kernel: audit: type=1104 audit(1742236891.661:139): pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:31.678546 kernel: audit: type=1106 audit(1742236891.661:140): pid=1431 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:31.678568 kernel: audit: type=1104 audit(1742236891.661:141): pid=1431 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:31.661000 audit[1431]: CRED_DISP pid=1431 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:31.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.81:22-10.0.0.1:41824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.685187 kernel: audit: type=1130 audit(1742236891.665:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.81:22-10.0.0.1:41824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.81:22-10.0.0.1:41822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:31.701000 audit[1465]: USER_ACCT pid=1465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:31.702754 sshd[1465]: Accepted publickey for core from 10.0.0.1 port 41824 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:41:31.702000 audit[1465]: CRED_ACQ pid=1465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:31.702000 audit[1465]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9297b0e0 a2=3 a3=0 items=0 ppid=1 pid=1465 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:31.702000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:41:31.703911 sshd[1465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:41:31.707039 systemd-logind[1293]: New session 7 of user core. Mar 17 18:41:31.707749 systemd[1]: Started session-7.scope. 
Mar 17 18:41:31.710000 audit[1465]: USER_START pid=1465 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:31.711000 audit[1469]: CRED_ACQ pid=1469 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:31.757000 audit[1470]: USER_ACCT pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.758666 sudo[1470]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:41:31.757000 audit[1470]: CRED_REFR pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.758933 sudo[1470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:41:31.759000 audit[1470]: USER_START pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:41:31.778487 systemd[1]: Starting docker.service... 
Mar 17 18:41:31.811199 env[1482]: time="2025-03-17T18:41:31.811156627Z" level=info msg="Starting up" Mar 17 18:41:31.812438 env[1482]: time="2025-03-17T18:41:31.812400469Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:41:31.812438 env[1482]: time="2025-03-17T18:41:31.812426598Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:41:31.812510 env[1482]: time="2025-03-17T18:41:31.812445043Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:41:31.812510 env[1482]: time="2025-03-17T18:41:31.812454931Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:41:31.813751 env[1482]: time="2025-03-17T18:41:31.813726435Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:41:31.813751 env[1482]: time="2025-03-17T18:41:31.813741824Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:41:31.813751 env[1482]: time="2025-03-17T18:41:31.813751833Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:41:31.813839 env[1482]: time="2025-03-17T18:41:31.813758756Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:41:32.615403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:41:32.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:32.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:32.615591 systemd[1]: Stopped kubelet.service. Mar 17 18:41:32.616821 systemd[1]: Starting kubelet.service... Mar 17 18:41:32.619517 env[1482]: time="2025-03-17T18:41:32.619061228Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 18:41:32.619517 env[1482]: time="2025-03-17T18:41:32.619081886Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 18:41:32.619517 env[1482]: time="2025-03-17T18:41:32.619246876Z" level=info msg="Loading containers: start." Mar 17 18:41:32.674000 audit[1519]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:32.674000 audit[1519]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffee13db840 a2=0 a3=7ffee13db82c items=0 ppid=1482 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:32.674000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Mar 17 18:41:32.675000 audit[1521]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:32.675000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc43eeccc0 a2=0 a3=7ffc43eeccac items=0 ppid=1482 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:32.675000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Mar 17 18:41:32.677000 audit[1523]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1523 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:32.677000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcf791c5a0 a2=0 a3=7ffcf791c58c items=0 ppid=1482 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:32.677000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Mar 17 18:41:32.679000 audit[1525]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:32.679000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffcd7ebcc0 a2=0 a3=7fffcd7ebcac items=0 ppid=1482 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:32.679000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Mar 17 18:41:32.691421 systemd[1]: Started kubelet.service. Mar 17 18:41:32.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:32.681000 audit[1527]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:32.681000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffb6f5d570 a2=0 a3=7fffb6f5d55c items=0 ppid=1482 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:32.681000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Mar 17 18:41:32.704000 audit[1544]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:32.704000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdfdcf57b0 a2=0 a3=7ffdfdcf579c items=0 ppid=1482 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:32.704000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Mar 17 18:41:33.015000 audit[1547]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.015000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed4dafd40 a2=0 a3=7ffed4dafd2c items=0 ppid=1482 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.015000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Mar 17 18:41:33.017000 audit[1549]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.017000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd8b65c090 a2=0 a3=7ffd8b65c07c items=0 ppid=1482 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.017000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Mar 17 18:41:33.019000 audit[1551]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.019000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffea6317570 a2=0 a3=7ffea631755c items=0 ppid=1482 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.019000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:41:33.027000 audit[1555]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.027000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd11d76f10 a2=0 a3=7ffd11d76efc items=0 ppid=1482 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.027000 
audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:41:33.031886 kubelet[1533]: E0317 18:41:33.031482 1533 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:33.032000 audit[1556]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.032000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe0a4830d0 a2=0 a3=7ffe0a4830bc items=0 ppid=1482 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.032000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:41:33.034040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:33.034163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:33.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:41:33.041887 kernel: Initializing XFRM netlink socket Mar 17 18:41:33.067666 env[1482]: time="2025-03-17T18:41:33.067629022Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Mar 17 18:41:33.085000 audit[1565]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.085000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd6397e880 a2=0 a3=7ffd6397e86c items=0 ppid=1482 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.085000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Mar 17 18:41:33.098000 audit[1568]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.098000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe849fe490 a2=0 a3=7ffe849fe47c items=0 ppid=1482 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.098000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Mar 17 18:41:33.101000 audit[1571]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.101000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe2df98070 a2=0 a3=7ffe2df9805c items=0 ppid=1482 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.101000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Mar 17 18:41:33.102000 audit[1573]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.102000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdbbea1ed0 a2=0 a3=7ffdbbea1ebc items=0 ppid=1482 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.102000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Mar 17 18:41:33.104000 audit[1575]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.104000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffedc7ac470 a2=0 a3=7ffedc7ac45c items=0 ppid=1482 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.104000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Mar 17 18:41:33.106000 audit[1577]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.106000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffdb2e46e10 a2=0 a3=7ffdb2e46dfc items=0 ppid=1482 
pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.106000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Mar 17 18:41:33.107000 audit[1579]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.107000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffedc5e6bd0 a2=0 a3=7ffedc5e6bbc items=0 ppid=1482 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.107000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Mar 17 18:41:33.113000 audit[1582]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.113000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe0265ab60 a2=0 a3=7ffe0265ab4c items=0 ppid=1482 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.113000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Mar 17 18:41:33.115000 audit[1584]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.115000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe6b813a90 a2=0 a3=7ffe6b813a7c items=0 ppid=1482 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.115000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Mar 17 18:41:33.116000 audit[1586]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.116000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff517f2ea0 a2=0 a3=7fff517f2e8c items=0 ppid=1482 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.116000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Mar 17 18:41:33.118000 audit[1588]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.118000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffc0b49b50 a2=0 a3=7fffc0b49b3c items=0 ppid=1482 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.118000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Mar 17 18:41:33.119700 systemd-networkd[1085]: docker0: Link UP Mar 17 18:41:33.126000 audit[1592]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.126000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcac30fef0 a2=0 a3=7ffcac30fedc items=0 ppid=1482 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.126000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:41:33.131000 audit[1593]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:41:33.131000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd0124df00 a2=0 a3=7ffd0124deec items=0 ppid=1482 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:33.131000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:41:33.133043 env[1482]: time="2025-03-17T18:41:33.133008719Z" level=info msg="Loading containers: done." 
Mar 17 18:41:33.147287 env[1482]: time="2025-03-17T18:41:33.147238430Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:41:33.147409 env[1482]: time="2025-03-17T18:41:33.147389463Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:41:33.147475 env[1482]: time="2025-03-17T18:41:33.147456168Z" level=info msg="Daemon has completed initialization" Mar 17 18:41:33.163498 systemd[1]: Started docker.service. Mar 17 18:41:33.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:33.167106 env[1482]: time="2025-03-17T18:41:33.167061799Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:41:34.256644 env[1309]: time="2025-03-17T18:41:34.256605863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:41:34.860512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287947515.mount: Deactivated successfully. 
Mar 17 18:41:36.435979 env[1309]: time="2025-03-17T18:41:36.435908936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:36.437723 env[1309]: time="2025-03-17T18:41:36.437660110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:36.439378 env[1309]: time="2025-03-17T18:41:36.439352423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:36.440925 env[1309]: time="2025-03-17T18:41:36.440896879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:36.441595 env[1309]: time="2025-03-17T18:41:36.441548340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 18:41:36.449906 env[1309]: time="2025-03-17T18:41:36.449872237Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:41:38.581607 env[1309]: time="2025-03-17T18:41:38.581547348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.598082 env[1309]: time="2025-03-17T18:41:38.598044420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.610650 env[1309]: time="2025-03-17T18:41:38.610606103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.619424 env[1309]: time="2025-03-17T18:41:38.619378912Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:38.620172 env[1309]: time="2025-03-17T18:41:38.620131924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 18:41:38.629531 env[1309]: time="2025-03-17T18:41:38.629499979Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:41:40.166554 env[1309]: time="2025-03-17T18:41:40.166472392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:40.168140 env[1309]: time="2025-03-17T18:41:40.168100124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:40.169827 env[1309]: time="2025-03-17T18:41:40.169774203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:40.171208 env[1309]: time="2025-03-17T18:41:40.171169869Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:40.171850 env[1309]: time="2025-03-17T18:41:40.171816963Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 18:41:40.180172 env[1309]: time="2025-03-17T18:41:40.180146921Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:41:41.540603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811012877.mount: Deactivated successfully. Mar 17 18:41:42.390731 env[1309]: time="2025-03-17T18:41:42.390667276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:42.392496 env[1309]: time="2025-03-17T18:41:42.392431124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:42.393751 env[1309]: time="2025-03-17T18:41:42.393724138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:42.395088 env[1309]: time="2025-03-17T18:41:42.395067797Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:42.395474 env[1309]: time="2025-03-17T18:41:42.395455204Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference 
\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 18:41:42.403058 env[1309]: time="2025-03-17T18:41:42.403026400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:41:42.888758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100109361.mount: Deactivated successfully. Mar 17 18:41:43.284973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:41:43.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:43.285154 systemd[1]: Stopped kubelet.service. Mar 17 18:41:43.286243 kernel: kauditd_printk_skb: 88 callbacks suppressed Mar 17 18:41:43.286291 kernel: audit: type=1130 audit(1742236903.284:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:43.286438 systemd[1]: Starting kubelet.service... Mar 17 18:41:43.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:43.292285 kernel: audit: type=1131 audit(1742236903.284:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:43.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:43.359699 systemd[1]: Started kubelet.service. 
Mar 17 18:41:43.363882 kernel: audit: type=1130 audit(1742236903.359:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:43.395505 kubelet[1674]: E0317 18:41:43.395451 1674 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:41:43.397565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:41:43.397695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:41:43.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:41:43.401885 kernel: audit: type=1131 audit(1742236903.397:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed'
Mar 17 18:41:43.963515 env[1309]: time="2025-03-17T18:41:43.963448741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:43.965468 env[1309]: time="2025-03-17T18:41:43.965404198Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:43.967116 env[1309]: time="2025-03-17T18:41:43.967093024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:43.968745 env[1309]: time="2025-03-17T18:41:43.968723111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:43.969323 env[1309]: time="2025-03-17T18:41:43.969289513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 18:41:43.977529 env[1309]: time="2025-03-17T18:41:43.977496160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 18:41:44.487034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913979590.mount: Deactivated successfully.
Mar 17 18:41:44.492610 env[1309]: time="2025-03-17T18:41:44.492567539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:44.494461 env[1309]: time="2025-03-17T18:41:44.494407910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:44.495670 env[1309]: time="2025-03-17T18:41:44.495643326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:44.496901 env[1309]: time="2025-03-17T18:41:44.496879043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:44.497339 env[1309]: time="2025-03-17T18:41:44.497308448Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 18:41:44.506022 env[1309]: time="2025-03-17T18:41:44.505996488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 18:41:45.068903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2398397375.mount: Deactivated successfully.
Mar 17 18:41:47.752738 env[1309]: time="2025-03-17T18:41:47.752684157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:47.754447 env[1309]: time="2025-03-17T18:41:47.754421875Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:47.756155 env[1309]: time="2025-03-17T18:41:47.756127503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:47.757708 env[1309]: time="2025-03-17T18:41:47.757683470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:41:47.758307 env[1309]: time="2025-03-17T18:41:47.758285589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 18:41:49.837528 systemd[1]: Stopped kubelet.service.
Mar 17 18:41:49.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:49.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:49.841204 systemd[1]: Starting kubelet.service...
Mar 17 18:41:49.846872 kernel: audit: type=1130 audit(1742236909.836:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:49.846962 kernel: audit: type=1131 audit(1742236909.838:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:49.856079 systemd[1]: Reloading.
Mar 17 18:41:49.904703 /usr/lib/systemd/system-generators/torcx-generator[1800]: time="2025-03-17T18:41:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:41:49.905059 /usr/lib/systemd/system-generators/torcx-generator[1800]: time="2025-03-17T18:41:49Z" level=info msg="torcx already run"
Mar 17 18:41:50.097362 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:41:50.097377 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:41:50.113810 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:41:50.181091 systemd[1]: Started kubelet.service.
Mar 17 18:41:50.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:50.185060 systemd[1]: Stopping kubelet.service...
Mar 17 18:41:50.185534 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:41:50.185732 systemd[1]: Stopped kubelet.service.
Mar 17 18:41:50.189094 kernel: audit: type=1130 audit(1742236910.180:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:50.189155 kernel: audit: type=1131 audit(1742236910.184:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:50.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:50.187044 systemd[1]: Starting kubelet.service...
Mar 17 18:41:50.258067 systemd[1]: Started kubelet.service.
Mar 17 18:41:50.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:50.263927 kernel: audit: type=1130 audit(1742236910.257:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:41:50.301159 kubelet[1863]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:41:50.301159 kubelet[1863]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:41:50.301159 kubelet[1863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:41:50.301534 kubelet[1863]: I0317 18:41:50.301188 1863 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:41:50.535081 kubelet[1863]: I0317 18:41:50.534976 1863 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:41:50.535081 kubelet[1863]: I0317 18:41:50.535013 1863 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:41:50.535265 kubelet[1863]: I0317 18:41:50.535243 1863 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:41:50.545261 kubelet[1863]: I0317 18:41:50.545220 1863 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:41:50.545681 kubelet[1863]: E0317 18:41:50.545661 1863 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.555302 kubelet[1863]: I0317 18:41:50.555274 1863 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:41:50.556975 kubelet[1863]: I0317 18:41:50.556939 1863 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:41:50.557146 kubelet[1863]: I0317 18:41:50.556969 1863 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:41:50.557531 kubelet[1863]: I0317 18:41:50.557513 1863 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:41:50.557531 kubelet[1863]: I0317 18:41:50.557528 1863 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:41:50.557638 kubelet[1863]: I0317 18:41:50.557620 1863 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:41:50.558210 kubelet[1863]: I0317 18:41:50.558193 1863 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:41:50.558210 kubelet[1863]: I0317 18:41:50.558209 1863 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:41:50.558265 kubelet[1863]: I0317 18:41:50.558228 1863 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:41:50.558265 kubelet[1863]: I0317 18:41:50.558243 1863 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:41:50.563497 kubelet[1863]: W0317 18:41:50.563441 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.563497 kubelet[1863]: E0317 18:41:50.563495 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.571175 kubelet[1863]: W0317 18:41:50.571137 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.571175 kubelet[1863]: E0317 18:41:50.571170 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.572429 kubelet[1863]: I0317 18:41:50.572412 1863 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:41:50.573549 kubelet[1863]: I0317 18:41:50.573527 1863 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:41:50.573599 kubelet[1863]: W0317 18:41:50.573571 1863 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:41:50.574103 kubelet[1863]: I0317 18:41:50.574084 1863 server.go:1264] "Started kubelet"
Mar 17 18:41:50.574259 kubelet[1863]: I0317 18:41:50.574212 1863 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:41:50.574416 kubelet[1863]: I0317 18:41:50.574288 1863 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:41:50.574694 kubelet[1863]: I0317 18:41:50.574492 1863 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:41:50.575231 kubelet[1863]: I0317 18:41:50.575201 1863 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:41:50.574000 audit[1863]: AVC avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:41:50.574000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Mar 17 18:41:50.580017 kubelet[1863]: I0317 18:41:50.575456 1863 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
Mar 17 18:41:50.580017 kubelet[1863]: I0317 18:41:50.575483 1863 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
Mar 17 18:41:50.580017 kubelet[1863]: I0317 18:41:50.575524 1863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:41:50.580017 kubelet[1863]: E0317 18:41:50.577179 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:41:50.580017 kubelet[1863]: I0317 18:41:50.577207 1863 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:41:50.580017 kubelet[1863]: I0317 18:41:50.577297 1863 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:41:50.580017 kubelet[1863]: I0317 18:41:50.577345 1863 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:41:50.580017 kubelet[1863]: W0317 18:41:50.577623 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.580017 kubelet[1863]: E0317 18:41:50.577657 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.580223 kubelet[1863]: E0317 18:41:50.579093 1863 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182dab3d49574905 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:41:50.574061829 +0000 UTC m=+0.313039460,LastTimestamp:2025-03-17 18:41:50.574061829 +0000 UTC m=+0.313039460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 18:41:50.580223 kubelet[1863]: E0317 18:41:50.579294 1863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms"
Mar 17 18:41:50.580630 kernel: audit: type=1400 audit(1742236910.574:190): avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:41:50.580693 kernel: audit: type=1401 audit(1742236910.574:190): op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Mar 17 18:41:50.580714 kernel: audit: type=1300 audit(1742236910.574:190): arch=c000003e syscall=188 success=no exit=-22 a0=c00064bbf0 a1=c000701878 a2=c00064bbc0 a3=25 items=0 ppid=1 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.574000 audit[1863]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00064bbf0 a1=c000701878 a2=c00064bbc0 a3=25 items=0 ppid=1 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.581271 kubelet[1863]: I0317 18:41:50.581247 1863 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:41:50.581330 kubelet[1863]: I0317 18:41:50.581305 1863 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:41:50.581678 kubelet[1863]: E0317 18:41:50.581647 1863 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:41:50.582223 kubelet[1863]: I0317 18:41:50.582199 1863 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:41:50.585413 kernel: audit: type=1327 audit(1742236910.574:190): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Mar 17 18:41:50.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Mar 17 18:41:50.589755 kernel: audit: type=1400 audit(1742236910.574:191): avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:41:50.574000 audit[1863]: AVC avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:41:50.574000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Mar 17 18:41:50.574000 audit[1863]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c9eb80 a1=c000701890 a2=c00064bc80 a3=25 items=0 ppid=1 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Mar 17 18:41:50.577000 audit[1875]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Mar 17 18:41:50.577000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff3a382c70 a2=0 a3=7fff3a382c5c items=0 ppid=1863 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.577000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Mar 17 18:41:50.578000 audit[1876]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Mar 17 18:41:50.578000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8c9b0c80 a2=0 a3=7ffe8c9b0c6c items=0 ppid=1863 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Mar 17 18:41:50.580000 audit[1878]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Mar 17 18:41:50.580000 audit[1878]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcce539680 a2=0 a3=7ffcce53966c items=0 ppid=1863 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Mar 17 18:41:50.582000 audit[1880]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Mar 17 18:41:50.582000 audit[1880]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc83c19460 a2=0 a3=7ffc83c1944c items=0 ppid=1863 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Mar 17 18:41:50.596000 audit[1885]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Mar 17 18:41:50.596000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcd9e54040 a2=0 a3=7ffcd9e5402c items=0 ppid=1863 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Mar 17 18:41:50.597354 kubelet[1863]: I0317 18:41:50.597208 1863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:41:50.597000 audit[1888]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:41:50.597000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeb256d960 a2=0 a3=7ffeb256d94c items=0 ppid=1863 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Mar 17 18:41:50.598432 kubelet[1863]: I0317 18:41:50.598414 1863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:41:50.598469 kubelet[1863]: I0317 18:41:50.598442 1863 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:41:50.598469 kubelet[1863]: I0317 18:41:50.598462 1863 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:41:50.598516 kubelet[1863]: E0317 18:41:50.598499 1863 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:41:50.597000 audit[1889]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Mar 17 18:41:50.597000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddcdbe220 a2=0 a3=10e3 items=0 ppid=1863 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Mar 17 18:41:50.599102 kubelet[1863]: W0317 18:41:50.598925 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.599102 kubelet[1863]: E0317 18:41:50.598952 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Mar 17 18:41:50.598000 audit[1891]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:41:50.598000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedb8ad3a0 a2=0 a3=7ffedb8ad38c items=0 ppid=1863 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Mar 17 18:41:50.599448 kubelet[1863]: I0317 18:41:50.599206 1863 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:41:50.599448 kubelet[1863]: I0317 18:41:50.599216 1863 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:41:50.599448 kubelet[1863]: I0317 18:41:50.599243 1863 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:41:50.598000 audit[1892]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Mar 17 18:41:50.598000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc96bf0590 a2=0 a3=7ffc96bf057c items=0 ppid=1863 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Mar 17 18:41:50.599000 audit[1893]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:41:50.599000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc8cc80720 a2=0 a3=7ffc8cc8070c items=0 ppid=1863 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Mar 17 18:41:50.599000 audit[1894]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Mar 17 18:41:50.599000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd02f831a0 a2=0 a3=7ffd02f8318c items=0 ppid=1863 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Mar 17 18:41:50.600000 audit[1895]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:41:50.600000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdda44aab0 a2=0 a3=7ffdda44aa9c items=0 ppid=1863 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Mar 17 18:41:50.679120 kubelet[1863]: I0317 18:41:50.679099 1863 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:41:50.679320 kubelet[1863]: E0317 18:41:50.679297 1863 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Mar 17 18:41:50.699543 kubelet[1863]: E0317 18:41:50.699521 1863 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:41:50.779933 kubelet[1863]: E0317 18:41:50.779889 1863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms"
Mar 17 18:41:50.870055 kubelet[1863]: I0317 18:41:50.869967 1863 policy_none.go:49] "None policy: Start"
Mar 17 18:41:50.870773 kubelet[1863]: I0317 18:41:50.870755 1863 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:41:50.870839 kubelet[1863]: I0317 18:41:50.870780 1863 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:41:50.876185 kubelet[1863]: I0317 18:41:50.876160 1863 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:41:50.875000 audit[1863]: AVC avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:41:50.875000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
Mar 17 18:41:50.875000 audit[1863]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000fd8cc0 a1=c000829c68 a2=c000fd8c90 a3=25 items=0 ppid=1 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:41:50.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
Mar 17 18:41:50.876474 kubelet[1863]: I0317 18:41:50.876221 1863 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument"
Mar 17 18:41:50.876474 kubelet[1863]: I0317 18:41:50.876321 1863 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:41:50.876474 kubelet[1863]: I0317 18:41:50.876416 1863 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:41:50.877970 kubelet[1863]: E0317 18:41:50.877937 1863 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 17 18:41:50.880890 kubelet[1863]: I0317 18:41:50.880833 1863 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:41:50.881152 kubelet[1863]: E0317 18:41:50.881116 1863 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Mar 17 18:41:50.900430 kubelet[1863]: I0317 18:41:50.900334 1863 topology_manager.go:215] "Topology Admit Handler" podUID="e5b18a04d8291a498f2ea6ee1d4dfbfb" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 17 18:41:50.901527 kubelet[1863]: I0317 18:41:50.901499 1863 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 17 18:41:50.902244 kubelet[1863]: I0317 18:41:50.902226 1863 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 17 18:41:51.079998 kubelet[1863]: I0317 18:41:51.079955 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5b18a04d8291a498f2ea6ee1d4dfbfb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5b18a04d8291a498f2ea6ee1d4dfbfb\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:41:51.079998 kubelet[1863]: I0317 18:41:51.079994 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5b18a04d8291a498f2ea6ee1d4dfbfb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5b18a04d8291a498f2ea6ee1d4dfbfb\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:41:51.079998 kubelet[1863]: I0317 18:41:51.080013 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5b18a04d8291a498f2ea6ee1d4dfbfb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e5b18a04d8291a498f2ea6ee1d4dfbfb\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:41:51.080239 kubelet[1863]: I0317 18:41:51.080027 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:41:51.080239 kubelet[1863]: I0317 18:41:51.080040 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:41:51.080239 kubelet[1863]: I0317 18:41:51.080052 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID:
\"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:51.080239 kubelet[1863]: I0317 18:41:51.080084 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:51.080239 kubelet[1863]: I0317 18:41:51.080173 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:51.080364 kubelet[1863]: I0317 18:41:51.080237 1863 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:51.180740 kubelet[1863]: E0317 18:41:51.180659 1863 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Mar 17 18:41:51.206005 kubelet[1863]: E0317 18:41:51.205989 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:51.206609 env[1309]: time="2025-03-17T18:41:51.206577726Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e5b18a04d8291a498f2ea6ee1d4dfbfb,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:51.206935 kubelet[1863]: E0317 18:41:51.206676 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:51.207181 env[1309]: time="2025-03-17T18:41:51.207145280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:51.208354 kubelet[1863]: E0317 18:41:51.208333 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:51.208695 env[1309]: time="2025-03-17T18:41:51.208664269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 17 18:41:51.282980 kubelet[1863]: I0317 18:41:51.282917 1863 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:41:51.283145 kubelet[1863]: E0317 18:41:51.283124 1863 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Mar 17 18:41:51.684264 kubelet[1863]: W0317 18:41:51.684187 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Mar 17 18:41:51.684264 kubelet[1863]: E0317 18:41:51.684261 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Mar 17 18:41:51.694270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692790335.mount: Deactivated successfully. Mar 17 18:41:51.699017 env[1309]: time="2025-03-17T18:41:51.698967257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.701791 env[1309]: time="2025-03-17T18:41:51.701744293Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.703394 env[1309]: time="2025-03-17T18:41:51.703341518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.704395 env[1309]: time="2025-03-17T18:41:51.704367752Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.705996 env[1309]: time="2025-03-17T18:41:51.705955760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.707121 env[1309]: time="2025-03-17T18:41:51.707087903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.708208 env[1309]: time="2025-03-17T18:41:51.708176454Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.709415 env[1309]: time="2025-03-17T18:41:51.709384409Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.710543 env[1309]: time="2025-03-17T18:41:51.710507755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.712585 env[1309]: time="2025-03-17T18:41:51.712546958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.713969 env[1309]: time="2025-03-17T18:41:51.713937165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.715298 env[1309]: time="2025-03-17T18:41:51.715279051Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:41:51.735907 env[1309]: time="2025-03-17T18:41:51.735803514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:51.736073 env[1309]: time="2025-03-17T18:41:51.736048664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:51.736179 env[1309]: time="2025-03-17T18:41:51.736156997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:51.736432 env[1309]: time="2025-03-17T18:41:51.736410112Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bc3123dd0dd2d9ad516bf85713ffbfe286804c31b3b3c1fb6b14e98241c3062 pid=1909 runtime=io.containerd.runc.v2 Mar 17 18:41:51.737472 env[1309]: time="2025-03-17T18:41:51.737297466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:51.737472 env[1309]: time="2025-03-17T18:41:51.737360293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:51.737472 env[1309]: time="2025-03-17T18:41:51.737370102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:51.737716 env[1309]: time="2025-03-17T18:41:51.737655617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d6c3d57e7060dfa4aceb4d7c926a62a107fb43c58f97228b8fb7755d0b08898 pid=1919 runtime=io.containerd.runc.v2 Mar 17 18:41:51.741063 env[1309]: time="2025-03-17T18:41:51.740839537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:41:51.741063 env[1309]: time="2025-03-17T18:41:51.740912494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:41:51.741063 env[1309]: time="2025-03-17T18:41:51.740925779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:41:51.741234 env[1309]: time="2025-03-17T18:41:51.741186347Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/704efe5adbf462a6f404f4671b1330c808fecbb9ee6726b86b0b9018e169dc94 pid=1942 runtime=io.containerd.runc.v2 Mar 17 18:41:51.786154 env[1309]: time="2025-03-17T18:41:51.786102345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bc3123dd0dd2d9ad516bf85713ffbfe286804c31b3b3c1fb6b14e98241c3062\"" Mar 17 18:41:51.790050 kubelet[1863]: E0317 18:41:51.788427 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:51.794725 env[1309]: time="2025-03-17T18:41:51.794694836Z" level=info msg="CreateContainer within sandbox \"1bc3123dd0dd2d9ad516bf85713ffbfe286804c31b3b3c1fb6b14e98241c3062\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:41:51.795294 env[1309]: time="2025-03-17T18:41:51.795273911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e5b18a04d8291a498f2ea6ee1d4dfbfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d6c3d57e7060dfa4aceb4d7c926a62a107fb43c58f97228b8fb7755d0b08898\"" Mar 17 18:41:51.795899 kubelet[1863]: E0317 18:41:51.795846 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:51.797898 env[1309]: time="2025-03-17T18:41:51.797545601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"704efe5adbf462a6f404f4671b1330c808fecbb9ee6726b86b0b9018e169dc94\"" Mar 17 18:41:51.798554 kubelet[1863]: E0317 18:41:51.798405 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:51.799288 env[1309]: time="2025-03-17T18:41:51.799248354Z" level=info msg="CreateContainer within sandbox \"5d6c3d57e7060dfa4aceb4d7c926a62a107fb43c58f97228b8fb7755d0b08898\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:41:51.799948 env[1309]: time="2025-03-17T18:41:51.799914332Z" level=info msg="CreateContainer within sandbox \"704efe5adbf462a6f404f4671b1330c808fecbb9ee6726b86b0b9018e169dc94\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:41:51.814675 env[1309]: time="2025-03-17T18:41:51.814635354Z" level=info msg="CreateContainer within sandbox \"1bc3123dd0dd2d9ad516bf85713ffbfe286804c31b3b3c1fb6b14e98241c3062\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3abd5df658551f65638e01a90394b5bc807574660d0e3f6c62bdc44d158a152d\"" Mar 17 18:41:51.815294 env[1309]: time="2025-03-17T18:41:51.815255898Z" level=info msg="StartContainer for \"3abd5df658551f65638e01a90394b5bc807574660d0e3f6c62bdc44d158a152d\"" Mar 17 18:41:51.821637 env[1309]: time="2025-03-17T18:41:51.821610943Z" level=info msg="CreateContainer within sandbox \"5d6c3d57e7060dfa4aceb4d7c926a62a107fb43c58f97228b8fb7755d0b08898\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"42dbc20aaaa3ba20ff3a449e3d391547c580b25364a88f8c0f4961b4192af07e\"" Mar 17 18:41:51.822145 env[1309]: time="2025-03-17T18:41:51.822114818Z" level=info msg="StartContainer for \"42dbc20aaaa3ba20ff3a449e3d391547c580b25364a88f8c0f4961b4192af07e\"" Mar 17 18:41:51.826458 env[1309]: time="2025-03-17T18:41:51.826424799Z" level=info msg="CreateContainer within sandbox 
\"704efe5adbf462a6f404f4671b1330c808fecbb9ee6726b86b0b9018e169dc94\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"04400f470c5040e7c7ee975192a9c75b1935b70f5993d81e2af73307e2839609\"" Mar 17 18:41:51.826735 env[1309]: time="2025-03-17T18:41:51.826710425Z" level=info msg="StartContainer for \"04400f470c5040e7c7ee975192a9c75b1935b70f5993d81e2af73307e2839609\"" Mar 17 18:41:51.836834 kubelet[1863]: W0317 18:41:51.836617 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Mar 17 18:41:51.836834 kubelet[1863]: E0317 18:41:51.836685 1863 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Mar 17 18:41:51.878920 env[1309]: time="2025-03-17T18:41:51.877747610Z" level=info msg="StartContainer for \"42dbc20aaaa3ba20ff3a449e3d391547c580b25364a88f8c0f4961b4192af07e\" returns successfully" Mar 17 18:41:51.885075 env[1309]: time="2025-03-17T18:41:51.885051675Z" level=info msg="StartContainer for \"3abd5df658551f65638e01a90394b5bc807574660d0e3f6c62bdc44d158a152d\" returns successfully" Mar 17 18:41:51.888078 env[1309]: time="2025-03-17T18:41:51.888057451Z" level=info msg="StartContainer for \"04400f470c5040e7c7ee975192a9c75b1935b70f5993d81e2af73307e2839609\" returns successfully" Mar 17 18:41:51.913162 kubelet[1863]: W0317 18:41:51.913080 1863 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Mar 17 18:41:51.913162 kubelet[1863]: E0317 18:41:51.913145 1863 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Mar 17 18:41:52.084782 kubelet[1863]: I0317 18:41:52.084469 1863 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:41:52.603895 kubelet[1863]: E0317 18:41:52.603868 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:52.605313 kubelet[1863]: E0317 18:41:52.605294 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:52.606646 kubelet[1863]: E0317 18:41:52.606627 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:52.828779 kubelet[1863]: E0317 18:41:52.828730 1863 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 18:41:52.927951 kubelet[1863]: I0317 18:41:52.927838 1863 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 18:41:52.933487 kubelet[1863]: E0317 18:41:52.933461 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:53.034027 kubelet[1863]: E0317 18:41:53.033999 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:53.134948 kubelet[1863]: E0317 18:41:53.134899 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:53.235617 kubelet[1863]: E0317 18:41:53.235495 
1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:53.336060 kubelet[1863]: E0317 18:41:53.336021 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:53.436472 kubelet[1863]: E0317 18:41:53.436430 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:53.537106 kubelet[1863]: E0317 18:41:53.537002 1863 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:41:53.613122 kubelet[1863]: E0317 18:41:53.613088 1863 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:53.613486 kubelet[1863]: E0317 18:41:53.613466 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:54.560541 kubelet[1863]: I0317 18:41:54.560507 1863 apiserver.go:52] "Watching apiserver" Mar 17 18:41:54.577625 kubelet[1863]: I0317 18:41:54.577593 1863 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:41:54.694116 systemd[1]: Reloading. 
Mar 17 18:41:54.750964 /usr/lib/systemd/system-generators/torcx-generator[2161]: time="2025-03-17T18:41:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:41:54.750990 /usr/lib/systemd/system-generators/torcx-generator[2161]: time="2025-03-17T18:41:54Z" level=info msg="torcx already run" Mar 17 18:41:54.834579 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:41:54.834597 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:41:54.850934 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:41:54.926598 kubelet[1863]: I0317 18:41:54.926575 1863 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:41:54.926611 systemd[1]: Stopping kubelet.service... Mar 17 18:41:54.945292 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:41:54.945514 systemd[1]: Stopped kubelet.service. Mar 17 18:41:54.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:41:54.946381 kernel: kauditd_printk_skb: 43 callbacks suppressed Mar 17 18:41:54.946426 kernel: audit: type=1131 audit(1742236914.944:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:54.947013 systemd[1]: Starting kubelet.service... Mar 17 18:41:55.032488 systemd[1]: Started kubelet.service. Mar 17 18:41:55.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:55.036893 kernel: audit: type=1130 audit(1742236915.031:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:55.069624 kubelet[2216]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:41:55.069624 kubelet[2216]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:41:55.069624 kubelet[2216]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:41:55.070040 kubelet[2216]: I0317 18:41:55.069656 2216 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:41:55.073844 kubelet[2216]: I0317 18:41:55.073810 2216 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:41:55.073844 kubelet[2216]: I0317 18:41:55.073837 2216 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:41:55.074076 kubelet[2216]: I0317 18:41:55.074055 2216 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:41:55.075211 kubelet[2216]: I0317 18:41:55.075188 2216 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:41:55.076301 kubelet[2216]: I0317 18:41:55.076247 2216 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:41:55.085600 kubelet[2216]: I0317 18:41:55.085084 2216 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:41:55.085600 kubelet[2216]: I0317 18:41:55.085542 2216 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:41:55.085769 kubelet[2216]: I0317 18:41:55.085569 2216 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:41:55.085868 kubelet[2216]: I0317 18:41:55.085782 2216 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
18:41:55.085868 kubelet[2216]: I0317 18:41:55.085792 2216 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:41:55.085868 kubelet[2216]: I0317 18:41:55.085849 2216 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:41:55.085944 kubelet[2216]: I0317 18:41:55.085933 2216 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:41:55.085968 kubelet[2216]: I0317 18:41:55.085948 2216 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:41:55.086004 kubelet[2216]: I0317 18:41:55.085994 2216 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:41:55.086029 kubelet[2216]: I0317 18:41:55.086007 2216 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:41:55.089046 kubelet[2216]: I0317 18:41:55.089018 2216 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:41:55.090657 kubelet[2216]: I0317 18:41:55.089592 2216 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:41:55.090657 kubelet[2216]: I0317 18:41:55.090182 2216 server.go:1264] "Started kubelet" Mar 17 18:41:55.090794 kubelet[2216]: I0317 18:41:55.090693 2216 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:41:55.091641 kubelet[2216]: I0317 18:41:55.091558 2216 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:41:55.091984 kubelet[2216]: I0317 18:41:55.091896 2216 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:41:55.092309 kubelet[2216]: I0317 18:41:55.092288 2216 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:41:55.101883 kernel: audit: type=1400 audit(1742236915.093:207): avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:41:55.101950 kernel: audit: type=1401 audit(1742236915.093:207): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:41:55.101977 kernel: audit: type=1300 audit(1742236915.093:207): arch=c000003e syscall=188 success=no exit=-22 a0=c000921950 a1=c0008ee960 a2=c000921920 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:55.093000 audit[2216]: AVC avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:41:55.093000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:41:55.093000 audit[2216]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000921950 a1=c0008ee960 a2=c000921920 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:55.102113 kubelet[2216]: I0317 18:41:55.094453 2216 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Mar 17 18:41:55.102113 kubelet[2216]: I0317 18:41:55.094481 2216 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Mar 17 18:41:55.102113 kubelet[2216]: I0317 18:41:55.094498 2216 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:41:55.102113 kubelet[2216]: E0317 
18:41:55.096938 2216 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:41:55.102113 kubelet[2216]: I0317 18:41:55.097082 2216 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:41:55.102113 kubelet[2216]: I0317 18:41:55.097153 2216 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:41:55.102113 kubelet[2216]: I0317 18:41:55.097228 2216 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:41:55.103890 kubelet[2216]: I0317 18:41:55.102999 2216 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:41:55.103890 kubelet[2216]: I0317 18:41:55.103014 2216 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:41:55.103890 kubelet[2216]: I0317 18:41:55.103075 2216 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:41:55.093000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:41:55.109283 kernel: audit: type=1327 audit(1742236915.093:207): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:41:55.093000 audit[2216]: AVC avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:41:55.112414 kernel: audit: 
type=1400 audit(1742236915.093:208): avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:41:55.114089 kernel: audit: type=1401 audit(1742236915.093:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:41:55.093000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:41:55.114360 kubelet[2216]: I0317 18:41:55.114312 2216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:41:55.093000 audit[2216]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008418e0 a1=c0008ee978 a2=c0009219e0 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:55.119881 kernel: audit: type=1300 audit(1742236915.093:208): arch=c000003e syscall=188 success=no exit=-22 a0=c0008418e0 a1=c0008ee978 a2=c0009219e0 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:55.119915 kubelet[2216]: I0317 18:41:55.115912 2216 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:41:55.119915 kubelet[2216]: I0317 18:41:55.115941 2216 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:41:55.119915 kubelet[2216]: I0317 18:41:55.115964 2216 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:41:55.119915 kubelet[2216]: E0317 18:41:55.116005 2216 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:41:55.093000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:41:55.124915 kernel: audit: type=1327 audit(1742236915.093:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:41:55.141537 kubelet[2216]: I0317 18:41:55.141498 2216 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:41:55.141537 kubelet[2216]: I0317 18:41:55.141515 2216 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:41:55.141537 kubelet[2216]: I0317 18:41:55.141532 2216 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:41:55.141713 kubelet[2216]: I0317 18:41:55.141657 2216 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:41:55.141713 kubelet[2216]: I0317 18:41:55.141667 2216 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:41:55.141713 kubelet[2216]: I0317 18:41:55.141685 2216 policy_none.go:49] "None policy: Start" Mar 17 18:41:55.142200 kubelet[2216]: I0317 18:41:55.142175 2216 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 
18:41:55.142200 kubelet[2216]: I0317 18:41:55.142206 2216 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:41:55.142376 kubelet[2216]: I0317 18:41:55.142362 2216 state_mem.go:75] "Updated machine memory state" Mar 17 18:41:55.143590 kubelet[2216]: I0317 18:41:55.143568 2216 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:41:55.142000 audit[2216]: AVC avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:41:55.142000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:41:55.142000 audit[2216]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00121cb70 a1=c000f2ed80 a2=c00121cb40 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:41:55.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:41:55.143807 kubelet[2216]: I0317 18:41:55.143640 2216 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Mar 17 18:41:55.143807 kubelet[2216]: I0317 18:41:55.143765 2216 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:41:55.143874 kubelet[2216]: I0317 18:41:55.143866 2216 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:41:55.200452 kubelet[2216]: I0317 18:41:55.200418 2216 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:41:55.208212 kubelet[2216]: I0317 18:41:55.208183 2216 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 17 18:41:55.208314 kubelet[2216]: I0317 18:41:55.208261 2216 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 18:41:55.216969 kubelet[2216]: I0317 18:41:55.216926 2216 topology_manager.go:215] "Topology Admit Handler" podUID="e5b18a04d8291a498f2ea6ee1d4dfbfb" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 18:41:55.217051 kubelet[2216]: I0317 18:41:55.217009 2216 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 18:41:55.217077 kubelet[2216]: I0317 18:41:55.217058 2216 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 18:41:55.398884 kubelet[2216]: I0317 18:41:55.398724 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:55.398884 kubelet[2216]: I0317 18:41:55.398797 2216 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:41:55.398884 kubelet[2216]: I0317 18:41:55.398833 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5b18a04d8291a498f2ea6ee1d4dfbfb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e5b18a04d8291a498f2ea6ee1d4dfbfb\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:55.398884 kubelet[2216]: I0317 18:41:55.398869 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:55.399123 kubelet[2216]: I0317 18:41:55.398891 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:55.399123 kubelet[2216]: I0317 18:41:55.398909 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:55.399123 kubelet[2216]: I0317 18:41:55.398927 
2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5b18a04d8291a498f2ea6ee1d4dfbfb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5b18a04d8291a498f2ea6ee1d4dfbfb\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:55.399123 kubelet[2216]: I0317 18:41:55.398942 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5b18a04d8291a498f2ea6ee1d4dfbfb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5b18a04d8291a498f2ea6ee1d4dfbfb\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:55.399123 kubelet[2216]: I0317 18:41:55.398958 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:41:55.522777 kubelet[2216]: E0317 18:41:55.522707 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:55.524044 kubelet[2216]: E0317 18:41:55.524028 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:55.524138 kubelet[2216]: E0317 18:41:55.524117 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:56.089973 kubelet[2216]: I0317 18:41:56.087803 2216 apiserver.go:52] "Watching apiserver" Mar 17 18:41:56.100070 kubelet[2216]: I0317 
18:41:56.099990 2216 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:41:56.126798 kubelet[2216]: E0317 18:41:56.126743 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:56.127350 kubelet[2216]: E0317 18:41:56.127321 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:56.142892 kubelet[2216]: E0317 18:41:56.140619 2216 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 18:41:56.142892 kubelet[2216]: E0317 18:41:56.141039 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:56.185251 kubelet[2216]: I0317 18:41:56.185193 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1851748930000001 podStartE2EDuration="1.185174893s" podCreationTimestamp="2025-03-17 18:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:56.173416286 +0000 UTC m=+1.136985322" watchObservedRunningTime="2025-03-17 18:41:56.185174893 +0000 UTC m=+1.148743939" Mar 17 18:41:56.194393 kubelet[2216]: I0317 18:41:56.194349 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.194339099 podStartE2EDuration="1.194339099s" podCreationTimestamp="2025-03-17 18:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-03-17 18:41:56.18536078 +0000 UTC m=+1.148929826" watchObservedRunningTime="2025-03-17 18:41:56.194339099 +0000 UTC m=+1.157908135" Mar 17 18:41:57.128043 kubelet[2216]: E0317 18:41:57.128004 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:59.681000 audit[1470]: USER_END pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:41:59.681000 audit[1470]: CRED_DISP pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:41:59.682499 sudo[1470]: pam_unix(sudo:session): session closed for user root Mar 17 18:41:59.685335 sshd[1465]: pam_unix(sshd:session): session closed for user core Mar 17 18:41:59.685000 audit[1465]: USER_END pid=1465 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:59.685000 audit[1465]: CRED_DISP pid=1465 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:41:59.689286 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:41824.service: Deactivated successfully. 
Mar 17 18:41:59.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.81:22-10.0.0.1:41824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:41:59.690640 systemd-logind[1293]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:41:59.690713 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:41:59.691819 systemd-logind[1293]: Removed session 7. Mar 17 18:42:01.133675 kubelet[2216]: E0317 18:42:01.133217 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:01.142708 kubelet[2216]: E0317 18:42:01.142670 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:01.174265 kubelet[2216]: I0317 18:42:01.174169 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.174148407 podStartE2EDuration="6.174148407s" podCreationTimestamp="2025-03-17 18:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:56.194797891 +0000 UTC m=+1.158366937" watchObservedRunningTime="2025-03-17 18:42:01.174148407 +0000 UTC m=+6.137717453" Mar 17 18:42:04.107187 kubelet[2216]: E0317 18:42:04.107109 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:04.155922 kubelet[2216]: E0317 18:42:04.154693 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 17 18:42:05.301091 kubelet[2216]: E0317 18:42:05.298632 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:06.165531 kubelet[2216]: E0317 18:42:06.165464 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:06.546352 update_engine[1300]: I0317 18:42:06.546191 1300 update_attempter.cc:509] Updating boot flags... Mar 17 18:42:09.128715 kubelet[2216]: I0317 18:42:09.128651 2216 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:42:09.129267 kubelet[2216]: I0317 18:42:09.129246 2216 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:42:09.129316 env[1309]: time="2025-03-17T18:42:09.129065772Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 18:42:09.796539 kubelet[2216]: I0317 18:42:09.796491 2216 topology_manager.go:215] "Topology Admit Handler" podUID="ef133f45-b023-41ba-ba4a-15228ffe9507" podNamespace="kube-system" podName="kube-proxy-265qh" Mar 17 18:42:09.814783 kubelet[2216]: W0317 18:42:09.813130 2216 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:42:09.814783 kubelet[2216]: E0317 18:42:09.814378 2216 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:42:09.815537 kubelet[2216]: W0317 18:42:09.814588 2216 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:42:09.815537 kubelet[2216]: E0317 18:42:09.815198 2216 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:42:09.930474 kubelet[2216]: I0317 18:42:09.929983 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fqx\" (UniqueName: 
\"kubernetes.io/projected/ef133f45-b023-41ba-ba4a-15228ffe9507-kube-api-access-l2fqx\") pod \"kube-proxy-265qh\" (UID: \"ef133f45-b023-41ba-ba4a-15228ffe9507\") " pod="kube-system/kube-proxy-265qh" Mar 17 18:42:09.930474 kubelet[2216]: I0317 18:42:09.930059 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef133f45-b023-41ba-ba4a-15228ffe9507-kube-proxy\") pod \"kube-proxy-265qh\" (UID: \"ef133f45-b023-41ba-ba4a-15228ffe9507\") " pod="kube-system/kube-proxy-265qh" Mar 17 18:42:09.930474 kubelet[2216]: I0317 18:42:09.930085 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef133f45-b023-41ba-ba4a-15228ffe9507-xtables-lock\") pod \"kube-proxy-265qh\" (UID: \"ef133f45-b023-41ba-ba4a-15228ffe9507\") " pod="kube-system/kube-proxy-265qh" Mar 17 18:42:09.930474 kubelet[2216]: I0317 18:42:09.930110 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef133f45-b023-41ba-ba4a-15228ffe9507-lib-modules\") pod \"kube-proxy-265qh\" (UID: \"ef133f45-b023-41ba-ba4a-15228ffe9507\") " pod="kube-system/kube-proxy-265qh" Mar 17 18:42:10.075989 kubelet[2216]: I0317 18:42:10.075804 2216 topology_manager.go:215] "Topology Admit Handler" podUID="e73a7b4e-6ed6-4e15-8319-820426909a23" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-5kq5k" Mar 17 18:42:10.233214 kubelet[2216]: I0317 18:42:10.233121 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e73a7b4e-6ed6-4e15-8319-820426909a23-var-lib-calico\") pod \"tigera-operator-7bc55997bb-5kq5k\" (UID: \"e73a7b4e-6ed6-4e15-8319-820426909a23\") " pod="tigera-operator/tigera-operator-7bc55997bb-5kq5k" Mar 17 
18:42:10.233214 kubelet[2216]: I0317 18:42:10.233184 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8n7t\" (UniqueName: \"kubernetes.io/projected/e73a7b4e-6ed6-4e15-8319-820426909a23-kube-api-access-x8n7t\") pod \"tigera-operator-7bc55997bb-5kq5k\" (UID: \"e73a7b4e-6ed6-4e15-8319-820426909a23\") " pod="tigera-operator/tigera-operator-7bc55997bb-5kq5k" Mar 17 18:42:10.387498 env[1309]: time="2025-03-17T18:42:10.386946312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5kq5k,Uid:e73a7b4e-6ed6-4e15-8319-820426909a23,Namespace:tigera-operator,Attempt:0,}" Mar 17 18:42:10.436391 env[1309]: time="2025-03-17T18:42:10.436020794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:10.436391 env[1309]: time="2025-03-17T18:42:10.436078343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:10.436391 env[1309]: time="2025-03-17T18:42:10.436100474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:10.436930 env[1309]: time="2025-03-17T18:42:10.436885191Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8f62570490ee4cb3d7154ebbb190fceac17b9a6325a13d2c5862bef0f7961cb pid=2326 runtime=io.containerd.runc.v2 Mar 17 18:42:10.478325 systemd[1]: run-containerd-runc-k8s.io-b8f62570490ee4cb3d7154ebbb190fceac17b9a6325a13d2c5862bef0f7961cb-runc.olhf4n.mount: Deactivated successfully. 
Mar 17 18:42:10.569634 env[1309]: time="2025-03-17T18:42:10.569579907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5kq5k,Uid:e73a7b4e-6ed6-4e15-8319-820426909a23,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b8f62570490ee4cb3d7154ebbb190fceac17b9a6325a13d2c5862bef0f7961cb\"" Mar 17 18:42:10.572730 env[1309]: time="2025-03-17T18:42:10.572685667Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Mar 17 18:42:11.040046 kubelet[2216]: E0317 18:42:11.038464 2216 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:11.040046 kubelet[2216]: E0317 18:42:11.038576 2216 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef133f45-b023-41ba-ba4a-15228ffe9507-kube-proxy podName:ef133f45-b023-41ba-ba4a-15228ffe9507 nodeName:}" failed. No retries permitted until 2025-03-17 18:42:11.53854451 +0000 UTC m=+16.502113556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ef133f45-b023-41ba-ba4a-15228ffe9507-kube-proxy") pod "kube-proxy-265qh" (UID: "ef133f45-b023-41ba-ba4a-15228ffe9507") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:42:11.600678 kubelet[2216]: E0317 18:42:11.600567 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:11.602559 env[1309]: time="2025-03-17T18:42:11.602118184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-265qh,Uid:ef133f45-b023-41ba-ba4a-15228ffe9507,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:11.636651 env[1309]: time="2025-03-17T18:42:11.636409337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:42:11.636651 env[1309]: time="2025-03-17T18:42:11.636463169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:42:11.636651 env[1309]: time="2025-03-17T18:42:11.636475232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:42:11.637130 env[1309]: time="2025-03-17T18:42:11.637055360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94fa387350a66a370ee0ed5efa247895c2b26889e2f73a09915c5afb537a95a1 pid=2367 runtime=io.containerd.runc.v2 Mar 17 18:42:11.700132 env[1309]: time="2025-03-17T18:42:11.699609661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-265qh,Uid:ef133f45-b023-41ba-ba4a-15228ffe9507,Namespace:kube-system,Attempt:0,} returns sandbox id \"94fa387350a66a370ee0ed5efa247895c2b26889e2f73a09915c5afb537a95a1\"" Mar 17 18:42:11.700361 kubelet[2216]: E0317 18:42:11.700319 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:11.704227 env[1309]: time="2025-03-17T18:42:11.704102452Z" level=info msg="CreateContainer within sandbox \"94fa387350a66a370ee0ed5efa247895c2b26889e2f73a09915c5afb537a95a1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:42:11.749654 env[1309]: time="2025-03-17T18:42:11.749360115Z" level=info msg="CreateContainer within sandbox \"94fa387350a66a370ee0ed5efa247895c2b26889e2f73a09915c5afb537a95a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89928c1fed9d62abbb53d93f4d65f76fd4156507e2bd9e7cf56140abae4edd57\"" Mar 17 18:42:11.750179 env[1309]: time="2025-03-17T18:42:11.750147805Z" level=info msg="StartContainer for 
\"89928c1fed9d62abbb53d93f4d65f76fd4156507e2bd9e7cf56140abae4edd57\"" Mar 17 18:42:11.821322 env[1309]: time="2025-03-17T18:42:11.821237937Z" level=info msg="StartContainer for \"89928c1fed9d62abbb53d93f4d65f76fd4156507e2bd9e7cf56140abae4edd57\" returns successfully" Mar 17 18:42:11.939526 kernel: kauditd_printk_skb: 9 callbacks suppressed Mar 17 18:42:11.939709 kernel: audit: type=1325 audit(1742236931.934:215): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:11.934000 audit[2462]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:11.934000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0dc507d0 a2=0 a3=7ffd0dc507bc items=0 ppid=2420 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:11.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:42:11.949121 kernel: audit: type=1300 audit(1742236931.934:215): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0dc507d0 a2=0 a3=7ffd0dc507bc items=0 ppid=2420 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:11.949240 kernel: audit: type=1327 audit(1742236931.934:215): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:42:11.952768 kernel: audit: type=1325 audit(1742236931.940:216): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 
18:42:11.940000 audit[2463]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:11.940000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc7258620 a2=0 a3=7ffcc725860c items=0 ppid=2420 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:11.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:42:11.968568 kernel: audit: type=1300 audit(1742236931.940:216): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc7258620 a2=0 a3=7ffcc725860c items=0 ppid=2420 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:11.968712 kernel: audit: type=1327 audit(1742236931.940:216): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:42:11.968739 kernel: audit: type=1325 audit(1742236931.941:217): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:11.941000 audit[2464]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:11.971764 kernel: audit: type=1300 audit(1742236931.941:217): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87fd9720 a2=0 a3=7ffd87fd970c items=0 ppid=2420 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 
17 18:42:11.941000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87fd9720 a2=0 a3=7ffd87fd970c items=0 ppid=2420 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:11.979916 kernel: audit: type=1327 audit(1742236931.941:217): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:42:11.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:42:11.983548 kernel: audit: type=1325 audit(1742236931.942:218): table=nat:41 family=10 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:11.942000 audit[2465]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:11.942000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff058dd5b0 a2=0 a3=7fff058dd59c items=0 ppid=2420 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:11.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:42:11.944000 audit[2467]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:11.944000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff50dfd610 a2=0 a3=7fff50dfd5fc items=0 ppid=2420 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:11.944000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Mar 17 18:42:11.948000 audit[2466]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:11.948000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3c24e7c0 a2=0 a3=7fff3c24e7ac items=0 ppid=2420 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:11.948000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Mar 17 18:42:12.034000 audit[2468]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.034000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffd2ad8f20 a2=0 a3=7fffd2ad8f0c items=0 ppid=2420 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Mar 17 18:42:12.038000 audit[2470]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.038000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd0a375fb0 a2=0 a3=7ffd0a375f9c items=0 ppid=2420 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Mar 17 18:42:12.049000 audit[2473]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.049000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdae2c5480 a2=0 a3=7ffdae2c546c items=0 ppid=2420 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.049000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Mar 17 18:42:12.052000 audit[2474]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.052000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7723ab90 a2=0 a3=7fff7723ab7c items=0 ppid=2420 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Mar 17 18:42:12.054000 audit[2476]: NETFILTER_CFG 
table=filter:48 family=2 entries=1 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.054000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc26691170 a2=0 a3=7ffc2669115c items=0 ppid=2420 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Mar 17 18:42:12.058000 audit[2477]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.058000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1bdc1380 a2=0 a3=7ffd1bdc136c items=0 ppid=2420 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Mar 17 18:42:12.061000 audit[2479]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.061000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff68fedf90 a2=0 a3=7fff68fedf7c items=0 ppid=2420 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.061000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Mar 17 18:42:12.074000 audit[2482]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.074000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe2fc99010 a2=0 a3=7ffe2fc98ffc items=0 ppid=2420 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Mar 17 18:42:12.084000 audit[2483]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.084000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee2746530 a2=0 a3=7ffee274651c items=0 ppid=2420 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Mar 17 18:42:12.092000 audit[2485]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.092000 audit[2485]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffc1183d390 a2=0 a3=7ffc1183d37c items=0 ppid=2420 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Mar 17 18:42:12.104000 audit[2486]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.104000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce8ecba50 a2=0 a3=7ffce8ecba3c items=0 ppid=2420 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Mar 17 18:42:12.113000 audit[2488]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.113000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc568be020 a2=0 a3=7ffc568be00c items=0 ppid=2420 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.113000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:42:12.132000 audit[2491]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.132000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdc2b6dd0 a2=0 a3=7ffcdc2b6dbc items=0 ppid=2420 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:42:12.159000 audit[2494]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.159000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5ccb7210 a2=0 a3=7ffc5ccb71fc items=0 ppid=2420 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.159000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Mar 17 18:42:12.164000 audit[2495]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.164000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc1cc2bb70 a2=0 a3=7ffc1cc2bb5c items=0 ppid=2420 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Mar 17 18:42:12.175000 audit[2497]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.175000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff44fad0b0 a2=0 a3=7fff44fad09c items=0 ppid=2420 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:42:12.186000 audit[2500]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.186000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe5eff5b30 a2=0 a3=7ffe5eff5b1c items=0 ppid=2420 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.186000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:42:12.188000 audit[2501]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.188000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9cc9afb0 a2=0 a3=7ffe9cc9af9c items=0 ppid=2420 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Mar 17 18:42:12.190120 kubelet[2216]: E0317 18:42:12.186584 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:12.192000 audit[2503]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:42:12.192000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd63756720 a2=0 a3=7ffd6375670c items=0 ppid=2420 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Mar 17 18:42:12.269000 audit[2509]: NETFILTER_CFG table=filter:63 
family=2 entries=8 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:42:12.269000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd0c4f10c0 a2=0 a3=7ffd0c4f10ac items=0 ppid=2420 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:42:12.283000 audit[2509]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:42:12.283000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd0c4f10c0 a2=0 a3=7ffd0c4f10ac items=0 ppid=2420 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:42:12.285000 audit[2514]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.285000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeb7d94dd0 a2=0 a3=7ffeb7d94dbc items=0 ppid=2420 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.285000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Mar 17 18:42:12.289000 audit[2516]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.289000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffee968d520 a2=0 a3=7ffee968d50c items=0 ppid=2420 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Mar 17 18:42:12.301000 audit[2519]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.301000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffca89cf230 a2=0 a3=7ffca89cf21c items=0 ppid=2420 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Mar 17 18:42:12.307000 audit[2520]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.307000 audit[2520]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7737f120 a2=0 a3=7ffd7737f10c items=0 ppid=2420 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.307000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Mar 17 18:42:12.311000 audit[2522]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.311000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7af06560 a2=0 a3=7ffe7af0654c items=0 ppid=2420 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Mar 17 18:42:12.313000 audit[2523]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.313000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffcfc45d70 a2=0 a3=7fffcfc45d5c items=0 ppid=2420 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Mar 17 18:42:12.320000 audit[2525]: 
NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.320000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd02bff230 a2=0 a3=7ffd02bff21c items=0 ppid=2420 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Mar 17 18:42:12.329000 audit[2528]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.329000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff1df64940 a2=0 a3=7fff1df6492c items=0 ppid=2420 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.329000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Mar 17 18:42:12.332000 audit[2529]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.332000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbf867fd0 a2=0 a3=7fffbf867fbc items=0 ppid=2420 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Mar 17 18:42:12.336000 audit[2531]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.336000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdb058bb30 a2=0 a3=7ffdb058bb1c items=0 ppid=2420 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Mar 17 18:42:12.338000 audit[2532]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.338000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff512d73a0 a2=0 a3=7fff512d738c items=0 ppid=2420 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.338000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Mar 17 18:42:12.341000 audit[2534]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.341000 audit[2534]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe68a32d0 a2=0 a3=7fffe68a32bc items=0 ppid=2420 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:42:12.352000 audit[2537]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:42:12.352000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0c1b54a0 a2=0 a3=7ffd0c1b548c items=0 ppid=2420 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:12.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Mar 17 18:42:12.354781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount954439235.mount: Deactivated successfully. 
Mar 17 18:42:12.368000 audit[2540]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.368000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfeb372f0 a2=0 a3=7ffcfeb372dc items=0 ppid=2420 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Mar 17 18:42:12.374000 audit[2541]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.374000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffe574e8d0 a2=0 a3=7fffe574e8bc items=0 ppid=2420 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.374000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Mar 17 18:42:12.377000 audit[2543]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.377000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffe4ea5930 a2=0 a3=7fffe4ea591c items=0 ppid=2420 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.377000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Mar 17 18:42:12.381000 audit[2546]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.381000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc896e5750 a2=0 a3=7ffc896e573c items=0 ppid=2420 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Mar 17 18:42:12.387000 audit[2547]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.387000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcec18f3e0 a2=0 a3=7ffcec18f3cc items=0 ppid=2420 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Mar 17 18:42:12.395000 audit[2549]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.395000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd51083060 a2=0 a3=7ffd5108304c items=0 ppid=2420 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Mar 17 18:42:12.395000 audit[2550]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.395000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8e292600 a2=0 a3=7fff8e2925ec items=0 ppid=2420 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Mar 17 18:42:12.399000 audit[2552]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.399000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffec6ff63d0 a2=0 a3=7ffec6ff63bc items=0 ppid=2420 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Mar 17 18:42:12.409000 audit[2555]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Mar 17 18:42:12.409000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffb37189e0 a2=0 a3=7fffb37189cc items=0 ppid=2420 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Mar 17 18:42:12.415000 audit[2557]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Mar 17 18:42:12.415000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffecd669210 a2=0 a3=7ffecd6691fc items=0 ppid=2420 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.415000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:12.415000 audit[2557]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto"
Mar 17 18:42:12.415000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffecd669210 a2=0 a3=7ffecd6691fc items=0 ppid=2420 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:12.415000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:13.498314 env[1309]: time="2025-03-17T18:42:13.498106435Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:13.506149 env[1309]: time="2025-03-17T18:42:13.506062623Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:13.509419 env[1309]: time="2025-03-17T18:42:13.509344967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:13.512295 env[1309]: time="2025-03-17T18:42:13.512231101Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:13.513142 env[1309]: time="2025-03-17T18:42:13.513087610Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Mar 17 18:42:13.526370 env[1309]: time="2025-03-17T18:42:13.526294292Z" level=info msg="CreateContainer within sandbox \"b8f62570490ee4cb3d7154ebbb190fceac17b9a6325a13d2c5862bef0f7961cb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 17 18:42:13.568905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3226065949.mount: Deactivated successfully.
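The audit PROCTITLE records above carry the triggering process's command line hex-encoded, with the argv elements separated by NUL bytes. A minimal sketch of decoding one such record (the `decode_proctitle` helper is illustrative, not a tool appearing in this log):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    return bytes.fromhex(hex_str).decode("ascii", errors="replace").replace("\x00", " ")

# One of the proctitle values from the audit entries above:
cmd = decode_proctitle(
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D5345525649434553002D74006E6174"
)
# cmd == 'ip6tables -w 5 -W 100000 -N KUBE-SERVICES -t nat'
```

This makes the audit trail readable: the entries correspond to kube-proxy (ppid 2420) creating its KUBE-* chains and rules via `ip6tables`/`iptables-restore`.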
Mar 17 18:42:13.585742 env[1309]: time="2025-03-17T18:42:13.585649667Z" level=info msg="CreateContainer within sandbox \"b8f62570490ee4cb3d7154ebbb190fceac17b9a6325a13d2c5862bef0f7961cb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c896e5b07282fc3ee4b6108d6ae956a635cd5f5f6183b5857a1296ad9d167936\""
Mar 17 18:42:13.586697 env[1309]: time="2025-03-17T18:42:13.586644497Z" level=info msg="StartContainer for \"c896e5b07282fc3ee4b6108d6ae956a635cd5f5f6183b5857a1296ad9d167936\""
Mar 17 18:42:13.665464 env[1309]: time="2025-03-17T18:42:13.665406452Z" level=info msg="StartContainer for \"c896e5b07282fc3ee4b6108d6ae956a635cd5f5f6183b5857a1296ad9d167936\" returns successfully"
Mar 17 18:42:14.218505 kubelet[2216]: I0317 18:42:14.218199 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-265qh" podStartSLOduration=5.21817496 podStartE2EDuration="5.21817496s" podCreationTimestamp="2025-03-17 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:42:12.202688303 +0000 UTC m=+17.166257380" watchObservedRunningTime="2025-03-17 18:42:14.21817496 +0000 UTC m=+19.181744006"
Mar 17 18:42:14.218505 kubelet[2216]: I0317 18:42:14.218322 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-5kq5k" podStartSLOduration=1.275187907 podStartE2EDuration="4.218316718s" podCreationTimestamp="2025-03-17 18:42:10 +0000 UTC" firstStartedPulling="2025-03-17 18:42:10.571654144 +0000 UTC m=+15.535223190" lastFinishedPulling="2025-03-17 18:42:13.514782955 +0000 UTC m=+18.478352001" observedRunningTime="2025-03-17 18:42:14.213338944 +0000 UTC m=+19.176908020" watchObservedRunningTime="2025-03-17 18:42:14.218316718 +0000 UTC m=+19.181885774"
Mar 17 18:42:16.554000 audit[2601]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:42:16.554000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe3c3681c0 a2=0 a3=7ffe3c3681ac items=0 ppid=2420 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:16.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:16.570000 audit[2601]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:42:16.570000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3c3681c0 a2=0 a3=0 items=0 ppid=2420 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:16.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:16.582000 audit[2603]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:42:16.582000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcbbdd5b10 a2=0 a3=7ffcbbdd5afc items=0 ppid=2420 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:16.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:16.586000 audit[2603]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:42:16.586000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbbdd5b10 a2=0 a3=0 items=0 ppid=2420 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:16.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:16.696541 kubelet[2216]: I0317 18:42:16.696485 2216 topology_manager.go:215] "Topology Admit Handler" podUID="bc790ab0-6d1f-43e1-b9d2-fcf48170e79e" podNamespace="calico-system" podName="calico-typha-687fd4bfbc-gzjg8"
Mar 17 18:42:16.734079 kubelet[2216]: I0317 18:42:16.734032 2216 topology_manager.go:215] "Topology Admit Handler" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" podNamespace="calico-system" podName="calico-node-lr4b4"
Mar 17 18:42:16.801960 kubelet[2216]: I0317 18:42:16.801912 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bc790ab0-6d1f-43e1-b9d2-fcf48170e79e-typha-certs\") pod \"calico-typha-687fd4bfbc-gzjg8\" (UID: \"bc790ab0-6d1f-43e1-b9d2-fcf48170e79e\") " pod="calico-system/calico-typha-687fd4bfbc-gzjg8"
Mar 17 18:42:16.802147 kubelet[2216]: I0317 18:42:16.801973 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc790ab0-6d1f-43e1-b9d2-fcf48170e79e-tigera-ca-bundle\") pod \"calico-typha-687fd4bfbc-gzjg8\" (UID: \"bc790ab0-6d1f-43e1-b9d2-fcf48170e79e\") " pod="calico-system/calico-typha-687fd4bfbc-gzjg8"
Mar 17 18:42:16.802147 kubelet[2216]: I0317 18:42:16.802002 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbh7\" (UniqueName: \"kubernetes.io/projected/bc790ab0-6d1f-43e1-b9d2-fcf48170e79e-kube-api-access-jhbh7\") pod \"calico-typha-687fd4bfbc-gzjg8\" (UID: \"bc790ab0-6d1f-43e1-b9d2-fcf48170e79e\") " pod="calico-system/calico-typha-687fd4bfbc-gzjg8"
Mar 17 18:42:16.851637 kubelet[2216]: I0317 18:42:16.851497 2216 topology_manager.go:215] "Topology Admit Handler" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" podNamespace="calico-system" podName="csi-node-driver-rwbh5"
Mar 17 18:42:16.852040 kubelet[2216]: E0317 18:42:16.852002 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98"
Mar 17 18:42:16.902885 kubelet[2216]: I0317 18:42:16.902798 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mssg2\" (UniqueName: \"kubernetes.io/projected/8fbc69a3-8524-4440-a2d9-778d5b73de1b-kube-api-access-mssg2\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903075 kubelet[2216]: I0317 18:42:16.902909 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-lib-modules\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903075 kubelet[2216]: I0317 18:42:16.902939 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbc69a3-8524-4440-a2d9-778d5b73de1b-tigera-ca-bundle\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903075 kubelet[2216]: I0317 18:42:16.902963 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-var-run-calico\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903075 kubelet[2216]: I0317 18:42:16.902990 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-log-dir\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903075 kubelet[2216]: I0317 18:42:16.903010 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-flexvol-driver-host\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903216 kubelet[2216]: I0317 18:42:16.903032 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-bin-dir\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903216 kubelet[2216]: I0317 18:42:16.903053 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8fbc69a3-8524-4440-a2d9-778d5b73de1b-node-certs\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903216 kubelet[2216]: I0317 18:42:16.903099 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-var-lib-calico\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903216 kubelet[2216]: I0317 18:42:16.903138 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-net-dir\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903216 kubelet[2216]: I0317 18:42:16.903160 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-xtables-lock\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:16.903356 kubelet[2216]: I0317 18:42:16.903180 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-policysync\") pod \"calico-node-lr4b4\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " pod="calico-system/calico-node-lr4b4"
Mar 17 18:42:17.001005 kubelet[2216]: E0317 18:42:17.000954 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:42:17.001756 env[1309]: time="2025-03-17T18:42:17.001696978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-687fd4bfbc-gzjg8,Uid:bc790ab0-6d1f-43e1-b9d2-fcf48170e79e,Namespace:calico-system,Attempt:0,}"
Mar 17 18:42:17.003558 kubelet[2216]: I0317 18:42:17.003516 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06aa47e7-14c4-4c99-9d64-88ed0bae7c98-varrun\") pod \"csi-node-driver-rwbh5\" (UID: \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\") " pod="calico-system/csi-node-driver-rwbh5"
Mar 17 18:42:17.003679 kubelet[2216]: I0317 18:42:17.003565 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06aa47e7-14c4-4c99-9d64-88ed0bae7c98-kubelet-dir\") pod \"csi-node-driver-rwbh5\" (UID: \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\") " pod="calico-system/csi-node-driver-rwbh5"
Mar 17 18:42:17.003679 kubelet[2216]: I0317 18:42:17.003624 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06aa47e7-14c4-4c99-9d64-88ed0bae7c98-socket-dir\") pod \"csi-node-driver-rwbh5\" (UID: \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\") " pod="calico-system/csi-node-driver-rwbh5"
Mar 17 18:42:17.003679 kubelet[2216]: I0317 18:42:17.003657 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06aa47e7-14c4-4c99-9d64-88ed0bae7c98-registration-dir\") pod \"csi-node-driver-rwbh5\" (UID: \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\") " pod="calico-system/csi-node-driver-rwbh5"
Mar 17 18:42:17.003820 kubelet[2216]: I0317 18:42:17.003686 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgztq\" (UniqueName: \"kubernetes.io/projected/06aa47e7-14c4-4c99-9d64-88ed0bae7c98-kube-api-access-qgztq\") pod \"csi-node-driver-rwbh5\" (UID: \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\") " pod="calico-system/csi-node-driver-rwbh5"
Mar 17 18:42:17.011912 kubelet[2216]: E0317 18:42:17.009128 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.011912 kubelet[2216]: W0317 18:42:17.009155 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.011912 kubelet[2216]: E0317 18:42:17.009178 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.025005 kubelet[2216]: E0317 18:42:17.024898 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.025005 kubelet[2216]: W0317 18:42:17.024932 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.025005 kubelet[2216]: E0317 18:42:17.024957 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.032724 env[1309]: time="2025-03-17T18:42:17.032466599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:42:17.032724 env[1309]: time="2025-03-17T18:42:17.032518597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:42:17.032724 env[1309]: time="2025-03-17T18:42:17.032533134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:42:17.032986 env[1309]: time="2025-03-17T18:42:17.032808544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ed2ea54ae6a5a1955a6ed10c3af7b2b308e92f330244d1647b0eccdb44a7fbe pid=2616 runtime=io.containerd.runc.v2
Mar 17 18:42:17.038084 kubelet[2216]: E0317 18:42:17.038051 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:42:17.040193 env[1309]: time="2025-03-17T18:42:17.040107935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lr4b4,Uid:8fbc69a3-8524-4440-a2d9-778d5b73de1b,Namespace:calico-system,Attempt:0,}"
Mar 17 18:42:17.062711 env[1309]: time="2025-03-17T18:42:17.061913764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:42:17.062711 env[1309]: time="2025-03-17T18:42:17.061959121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:42:17.062711 env[1309]: time="2025-03-17T18:42:17.061968498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:42:17.062711 env[1309]: time="2025-03-17T18:42:17.062171871Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223 pid=2649 runtime=io.containerd.runc.v2
Mar 17 18:42:17.090108 env[1309]: time="2025-03-17T18:42:17.090043806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-687fd4bfbc-gzjg8,Uid:bc790ab0-6d1f-43e1-b9d2-fcf48170e79e,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ed2ea54ae6a5a1955a6ed10c3af7b2b308e92f330244d1647b0eccdb44a7fbe\""
Mar 17 18:42:17.090956 kubelet[2216]: E0317 18:42:17.090926 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:42:17.091918 env[1309]: time="2025-03-17T18:42:17.091694369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Mar 17 18:42:17.107637 kubelet[2216]: E0317 18:42:17.107422 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.107637 kubelet[2216]: W0317 18:42:17.107446 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.107637 kubelet[2216]: E0317 18:42:17.107475 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.110281 kubelet[2216]: E0317 18:42:17.110242 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.110281 kubelet[2216]: W0317 18:42:17.110270 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.110418 kubelet[2216]: E0317 18:42:17.110294 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.110550 kubelet[2216]: E0317 18:42:17.110527 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.110550 kubelet[2216]: W0317 18:42:17.110546 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.110632 kubelet[2216]: E0317 18:42:17.110559 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.110766 kubelet[2216]: E0317 18:42:17.110745 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.110825 kubelet[2216]: W0317 18:42:17.110777 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.110825 kubelet[2216]: E0317 18:42:17.110806 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.111067 kubelet[2216]: E0317 18:42:17.111050 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.111130 kubelet[2216]: W0317 18:42:17.111066 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.111130 kubelet[2216]: E0317 18:42:17.111088 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.111334 kubelet[2216]: E0317 18:42:17.111311 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.111334 kubelet[2216]: W0317 18:42:17.111324 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.111419 kubelet[2216]: E0317 18:42:17.111339 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.111562 kubelet[2216]: E0317 18:42:17.111540 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.111562 kubelet[2216]: W0317 18:42:17.111553 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.111645 kubelet[2216]: E0317 18:42:17.111569 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.111681 env[1309]: time="2025-03-17T18:42:17.111589064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lr4b4,Uid:8fbc69a3-8524-4440-a2d9-778d5b73de1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\""
Mar 17 18:42:17.112067 kubelet[2216]: E0317 18:42:17.112042 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.112067 kubelet[2216]: W0317 18:42:17.112056 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.112163 kubelet[2216]: E0317 18:42:17.112071 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.112284 kubelet[2216]: E0317 18:42:17.112271 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.112284 kubelet[2216]: W0317 18:42:17.112281 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.112366 kubelet[2216]: E0317 18:42:17.112323 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.112499 kubelet[2216]: E0317 18:42:17.112485 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.112499 kubelet[2216]: W0317 18:42:17.112496 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.112713 kubelet[2216]: E0317 18:42:17.112589 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.113030 kubelet[2216]: E0317 18:42:17.112686 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.113095 kubelet[2216]: W0317 18:42:17.113041 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.113131 kubelet[2216]: E0317 18:42:17.113091 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:42:17.113328 kubelet[2216]: E0317 18:42:17.113310 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.113328 kubelet[2216]: W0317 18:42:17.113323 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.113515 kubelet[2216]: E0317 18:42:17.113497 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.113515 kubelet[2216]: W0317 18:42:17.113509 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.114535 kubelet[2216]: E0317 18:42:17.113659 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.114535 kubelet[2216]: E0317 18:42:17.113680 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.114535 kubelet[2216]: E0317 18:42:17.113690 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 18:42:17.114535 kubelet[2216]: E0317 18:42:17.113995 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.114535 kubelet[2216]: W0317 18:42:17.114005 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.114535 kubelet[2216]: E0317 18:42:17.114159 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 18:42:17.114535 kubelet[2216]: W0317 18:42:17.114167 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 18:42:17.114535 kubelet[2216]: E0317 18:42:17.114296 2216 driver-call.go:262] Failed to unmarshal output
for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.114535 kubelet[2216]: W0317 18:42:17.114303 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.114535 kubelet[2216]: E0317 18:42:17.114317 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:17.114902 kubelet[2216]: E0317 18:42:17.114466 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.114902 kubelet[2216]: W0317 18:42:17.114473 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.114902 kubelet[2216]: E0317 18:42:17.114481 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:17.114902 kubelet[2216]: E0317 18:42:17.114665 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.114902 kubelet[2216]: W0317 18:42:17.114674 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.114902 kubelet[2216]: E0317 18:42:17.114682 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:17.114902 kubelet[2216]: E0317 18:42:17.114890 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.114902 kubelet[2216]: W0317 18:42:17.114898 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.114902 kubelet[2216]: E0317 18:42:17.114907 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:17.115308 kubelet[2216]: E0317 18:42:17.115089 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.115308 kubelet[2216]: W0317 18:42:17.115098 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.115308 kubelet[2216]: E0317 18:42:17.115106 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:17.115537 kubelet[2216]: E0317 18:42:17.115516 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.115537 kubelet[2216]: W0317 18:42:17.115533 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.115623 kubelet[2216]: E0317 18:42:17.115546 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:17.115717 kubelet[2216]: E0317 18:42:17.115702 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.115717 kubelet[2216]: W0317 18:42:17.115714 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.115777 kubelet[2216]: E0317 18:42:17.115723 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:17.116173 kubelet[2216]: E0317 18:42:17.116144 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:17.116238 kubelet[2216]: E0317 18:42:17.116179 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.116331 kubelet[2216]: W0317 18:42:17.116303 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.116364 kubelet[2216]: E0317 18:42:17.116334 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:17.116985 kubelet[2216]: E0317 18:42:17.116955 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:17.120091 kubelet[2216]: E0317 18:42:17.120060 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.120167 kubelet[2216]: W0317 18:42:17.120089 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.120167 kubelet[2216]: E0317 18:42:17.120116 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:17.121097 kubelet[2216]: E0317 18:42:17.120578 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.121097 kubelet[2216]: W0317 18:42:17.120706 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.121097 kubelet[2216]: E0317 18:42:17.120717 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:17.123773 kubelet[2216]: E0317 18:42:17.123714 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:17.123773 kubelet[2216]: W0317 18:42:17.123729 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:17.123773 kubelet[2216]: E0317 18:42:17.123743 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:17.594000 audit[2717]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:42:17.596678 kernel: kauditd_printk_skb: 155 callbacks suppressed Mar 17 18:42:17.596732 kernel: audit: type=1325 audit(1742236937.594:270): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:42:17.594000 audit[2717]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffcf9015ef0 a2=0 a3=7ffcf9015edc items=0 ppid=2420 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:17.604580 kernel: audit: type=1300 audit(1742236937.594:270): arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffcf9015ef0 a2=0 a3=7ffcf9015edc items=0 ppid=2420 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:17.604700 kernel: audit: type=1327 audit(1742236937.594:270): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:42:17.594000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:42:17.608000 audit[2717]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:42:17.608000 audit[2717]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf9015ef0 a2=0 a3=0 items=0 ppid=2420 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:17.616738 kernel: audit: type=1325 audit(1742236937.608:271): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:42:17.616823 kernel: audit: type=1300 audit(1742236937.608:271): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf9015ef0 a2=0 a3=0 items=0 ppid=2420 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:17.616865 kernel: audit: type=1327 audit(1742236937.608:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:42:17.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:42:18.669861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973526974.mount: Deactivated successfully. 
Mar 17 18:42:19.117336 kubelet[2216]: E0317 18:42:19.116992 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:19.664924 env[1309]: time="2025-03-17T18:42:19.664876608Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:19.667116 env[1309]: time="2025-03-17T18:42:19.667064953Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:19.668372 env[1309]: time="2025-03-17T18:42:19.668341530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:19.669787 env[1309]: time="2025-03-17T18:42:19.669727974Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:19.670221 env[1309]: time="2025-03-17T18:42:19.670187891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Mar 17 18:42:19.671195 env[1309]: time="2025-03-17T18:42:19.671170884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Mar 17 18:42:19.680258 env[1309]: time="2025-03-17T18:42:19.680215025Z" level=info msg="CreateContainer within sandbox 
\"5ed2ea54ae6a5a1955a6ed10c3af7b2b308e92f330244d1647b0eccdb44a7fbe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 18:42:19.697326 env[1309]: time="2025-03-17T18:42:19.697246592Z" level=info msg="CreateContainer within sandbox \"5ed2ea54ae6a5a1955a6ed10c3af7b2b308e92f330244d1647b0eccdb44a7fbe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9d828e68c51750f14c165dfd21a0f1a7cfa9f7f37f27d70c91bf5b0acf99ce2d\"" Mar 17 18:42:19.697760 env[1309]: time="2025-03-17T18:42:19.697724463Z" level=info msg="StartContainer for \"9d828e68c51750f14c165dfd21a0f1a7cfa9f7f37f27d70c91bf5b0acf99ce2d\"" Mar 17 18:42:19.757671 env[1309]: time="2025-03-17T18:42:19.757626522Z" level=info msg="StartContainer for \"9d828e68c51750f14c165dfd21a0f1a7cfa9f7f37f27d70c91bf5b0acf99ce2d\" returns successfully" Mar 17 18:42:20.212579 kubelet[2216]: E0317 18:42:20.212538 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:20.220531 kubelet[2216]: I0317 18:42:20.220472 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-687fd4bfbc-gzjg8" podStartSLOduration=1.640923276 podStartE2EDuration="4.220454414s" podCreationTimestamp="2025-03-17 18:42:16 +0000 UTC" firstStartedPulling="2025-03-17 18:42:17.091416295 +0000 UTC m=+22.054985341" lastFinishedPulling="2025-03-17 18:42:19.670947433 +0000 UTC m=+24.634516479" observedRunningTime="2025-03-17 18:42:20.220018152 +0000 UTC m=+25.183587198" watchObservedRunningTime="2025-03-17 18:42:20.220454414 +0000 UTC m=+25.184023460" Mar 17 18:42:20.228502 kubelet[2216]: E0317 18:42:20.228476 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.228502 kubelet[2216]: W0317 18:42:20.228495 2216 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.228607 kubelet[2216]: E0317 18:42:20.228512 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.228649 kubelet[2216]: E0317 18:42:20.228633 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.228649 kubelet[2216]: W0317 18:42:20.228642 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.228728 kubelet[2216]: E0317 18:42:20.228651 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.228898 kubelet[2216]: E0317 18:42:20.228882 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.228898 kubelet[2216]: W0317 18:42:20.228893 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.228898 kubelet[2216]: E0317 18:42:20.228901 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.229075 kubelet[2216]: E0317 18:42:20.229051 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.229075 kubelet[2216]: W0317 18:42:20.229064 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.229075 kubelet[2216]: E0317 18:42:20.229073 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.229295 kubelet[2216]: E0317 18:42:20.229220 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.229295 kubelet[2216]: W0317 18:42:20.229227 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.229295 kubelet[2216]: E0317 18:42:20.229235 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.229442 kubelet[2216]: E0317 18:42:20.229408 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.229442 kubelet[2216]: W0317 18:42:20.229433 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.229498 kubelet[2216]: E0317 18:42:20.229458 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.229699 kubelet[2216]: E0317 18:42:20.229677 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.229699 kubelet[2216]: W0317 18:42:20.229694 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.229765 kubelet[2216]: E0317 18:42:20.229705 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.229927 kubelet[2216]: E0317 18:42:20.229913 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.229955 kubelet[2216]: W0317 18:42:20.229926 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.229955 kubelet[2216]: E0317 18:42:20.229939 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.230151 kubelet[2216]: E0317 18:42:20.230138 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.230178 kubelet[2216]: W0317 18:42:20.230150 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.230178 kubelet[2216]: E0317 18:42:20.230160 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.230327 kubelet[2216]: E0317 18:42:20.230314 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.230353 kubelet[2216]: W0317 18:42:20.230326 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.230353 kubelet[2216]: E0317 18:42:20.230339 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.230532 kubelet[2216]: E0317 18:42:20.230519 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.230557 kubelet[2216]: W0317 18:42:20.230531 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.230557 kubelet[2216]: E0317 18:42:20.230541 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.230719 kubelet[2216]: E0317 18:42:20.230705 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.230757 kubelet[2216]: W0317 18:42:20.230718 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.230757 kubelet[2216]: E0317 18:42:20.230728 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.230928 kubelet[2216]: E0317 18:42:20.230914 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.230928 kubelet[2216]: W0317 18:42:20.230926 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.230983 kubelet[2216]: E0317 18:42:20.230937 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.231132 kubelet[2216]: E0317 18:42:20.231115 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.231132 kubelet[2216]: W0317 18:42:20.231127 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.231211 kubelet[2216]: E0317 18:42:20.231137 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.231320 kubelet[2216]: E0317 18:42:20.231307 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.231351 kubelet[2216]: W0317 18:42:20.231320 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.231351 kubelet[2216]: E0317 18:42:20.231329 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.236552 kubelet[2216]: E0317 18:42:20.236526 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.236552 kubelet[2216]: W0317 18:42:20.236540 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.236552 kubelet[2216]: E0317 18:42:20.236549 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.236715 kubelet[2216]: E0317 18:42:20.236699 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.236715 kubelet[2216]: W0317 18:42:20.236710 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.236797 kubelet[2216]: E0317 18:42:20.236742 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.236912 kubelet[2216]: E0317 18:42:20.236894 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.236912 kubelet[2216]: W0317 18:42:20.236906 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.236987 kubelet[2216]: E0317 18:42:20.236923 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.237091 kubelet[2216]: E0317 18:42:20.237069 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.237091 kubelet[2216]: W0317 18:42:20.237081 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.237143 kubelet[2216]: E0317 18:42:20.237093 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:20.237317 kubelet[2216]: E0317 18:42:20.237295 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.237317 kubelet[2216]: W0317 18:42:20.237313 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.237388 kubelet[2216]: E0317 18:42:20.237331 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:42:20.237527 kubelet[2216]: E0317 18:42:20.237511 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:20.237527 kubelet[2216]: W0317 18:42:20.237524 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:20.237594 kubelet[2216]: E0317 18:42:20.237541 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:42:21.118273 kubelet[2216]: E0317 18:42:21.118159 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:21.213634 kubelet[2216]: I0317 18:42:21.213607 2216 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:42:21.214476 kubelet[2216]: E0317 18:42:21.214457 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:21.240291 kubelet[2216]: E0317 18:42:21.240154 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:42:21.240291 kubelet[2216]: W0317 18:42:21.240180 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:42:21.240291 kubelet[2216]: E0317 18:42:21.240203 2216 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
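The `dns.go:153` warning above reflects the kubelet's cap of three nameservers per pod resolv.conf: the host lists more than three, so the extras are silently dropped and only the first three are applied. A sketch of that truncation, assuming the fourth server (`9.9.9.9` here) is a hypothetical stand-in since the log does not show which entries were omitted:

```python
MAX_DNS_NAMESERVERS = 3  # kubelet's per-pod resolv.conf limit

def apply_nameserver_limit(nameservers):
    """Keep only the first three nameservers, as the kubelet does when the
    host resolv.conf lists more; the remainder are omitted with the
    'Nameserver limits exceeded' warning seen above."""
    kept = nameservers[:MAX_DNS_NAMESERVERS]
    omitted = nameservers[MAX_DNS_NAMESERVERS:]
    return kept, omitted

# "the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
kept, omitted = apply_nameserver_limit(
    ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])
```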
Error: unexpected end of JSON input" Mar 17 18:42:21.565252 env[1309]: time="2025-03-17T18:42:21.565182446Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:21.570021 env[1309]: time="2025-03-17T18:42:21.569963664Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:21.572231 env[1309]: time="2025-03-17T18:42:21.572187654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:21.574532 env[1309]: time="2025-03-17T18:42:21.574451219Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:21.575094 env[1309]: time="2025-03-17T18:42:21.575052341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Mar 17 18:42:21.577461 env[1309]: time="2025-03-17T18:42:21.577419210Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 18:42:21.594136 env[1309]: time="2025-03-17T18:42:21.594071393Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0\"" Mar 17 
18:42:21.594661 env[1309]: time="2025-03-17T18:42:21.594622571Z" level=info msg="StartContainer for \"c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0\"" Mar 17 18:42:21.657344 env[1309]: time="2025-03-17T18:42:21.657287511Z" level=info msg="StartContainer for \"c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0\" returns successfully" Mar 17 18:42:21.687206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0-rootfs.mount: Deactivated successfully. Mar 17 18:42:21.840667 env[1309]: time="2025-03-17T18:42:21.840509568Z" level=info msg="shim disconnected" id=c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0 Mar 17 18:42:21.840667 env[1309]: time="2025-03-17T18:42:21.840573208Z" level=warning msg="cleaning up after shim disconnected" id=c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0 namespace=k8s.io Mar 17 18:42:21.840667 env[1309]: time="2025-03-17T18:42:21.840585341Z" level=info msg="cleaning up dead shim" Mar 17 18:42:21.847419 env[1309]: time="2025-03-17T18:42:21.847364563Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2877 runtime=io.containerd.runc.v2\n" Mar 17 18:42:22.219613 kubelet[2216]: E0317 18:42:22.219550 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:22.223058 env[1309]: time="2025-03-17T18:42:22.222429104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Mar 17 18:42:23.116704 kubelet[2216]: E0317 18:42:23.116600 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
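The `flexvol-driver` container started above comes from the `pod2daemon-flexvol` image pulled just before it; its job is to install the missing FlexVolume driver binary into the kubelet's plugin directory, after which the probe errors earlier in the log should stop. A hedged sketch of the presence check (hypothetical helper; the real fix is the container copying the binary, not this check):

```python
import os
import stat

# The path the kubelet was probing in the earlier FlexVolume errors.
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def driver_installed(path: str) -> bool:
    """True once the FlexVolume driver exists and is executable, i.e. once
    the flexvol-driver container has done its work."""
    return os.path.isfile(path) and bool(os.stat(path).st_mode & stat.S_IXUSR)
```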
pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:25.117580 kubelet[2216]: E0317 18:42:25.117498 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:27.120737 kubelet[2216]: E0317 18:42:27.119191 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:29.116729 kubelet[2216]: E0317 18:42:29.116665 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:30.213426 kubelet[2216]: E0317 18:42:30.213111 2216 kubelet.go:2511] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.097s" Mar 17 18:42:31.117245 kubelet[2216]: E0317 18:42:31.117184 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:31.916165 env[1309]: time="2025-03-17T18:42:31.916079428Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:32.124236 env[1309]: time="2025-03-17T18:42:32.124187726Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:32.239237 env[1309]: time="2025-03-17T18:42:32.239192157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:32.275354 env[1309]: time="2025-03-17T18:42:32.275040991Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:32.278252 env[1309]: time="2025-03-17T18:42:32.278211740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Mar 17 18:42:32.281293 env[1309]: time="2025-03-17T18:42:32.281255712Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 18:42:32.619736 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:56336.service. Mar 17 18:42:32.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.81:22-10.0.0.1:56336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:32.624875 kernel: audit: type=1130 audit(1742236952.619:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.81:22-10.0.0.1:56336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:32.658170 env[1309]: time="2025-03-17T18:42:32.658100079Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a\"" Mar 17 18:42:32.658700 env[1309]: time="2025-03-17T18:42:32.658672215Z" level=info msg="StartContainer for \"450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a\"" Mar 17 18:42:32.665000 audit[2899]: USER_ACCT pid=2899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.671044 sshd[2899]: Accepted publickey for core from 10.0.0.1 port 56336 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:42:32.670000 audit[2899]: CRED_ACQ pid=2899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.671471 sshd[2899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:32.677792 kernel: audit: type=1101 audit(1742236952.665:273): pid=2899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.677897 kernel: audit: 
type=1103 audit(1742236952.670:274): pid=2899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.677922 kernel: audit: type=1006 audit(1742236952.670:275): pid=2899 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Mar 17 18:42:32.670000 audit[2899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd48b86060 a2=3 a3=0 items=0 ppid=1 pid=2899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:32.685058 kernel: audit: type=1300 audit(1742236952.670:275): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd48b86060 a2=3 a3=0 items=0 ppid=1 pid=2899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:32.685122 kernel: audit: type=1327 audit(1742236952.670:275): proctitle=737368643A20636F7265205B707269765D Mar 17 18:42:32.670000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:42:32.684707 systemd[1]: Started session-8.scope. Mar 17 18:42:32.685146 systemd-logind[1293]: New session 8 of user core. 
Mar 17 18:42:32.698000 audit[2899]: USER_START pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.704888 kernel: audit: type=1105 audit(1742236952.698:276): pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.705000 audit[2927]: CRED_ACQ pid=2927 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.710874 kernel: audit: type=1103 audit(1742236952.705:277): pid=2927 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.723661 env[1309]: time="2025-03-17T18:42:32.723605070Z" level=info msg="StartContainer for \"450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a\" returns successfully" Mar 17 18:42:32.920647 sshd[2899]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:32.920000 audit[2899]: USER_END pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.923189 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:56336.service: Deactivated successfully. 
Mar 17 18:42:32.924515 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:42:32.924895 systemd-logind[1293]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:42:32.925651 systemd-logind[1293]: Removed session 8. Mar 17 18:42:32.920000 audit[2899]: CRED_DISP pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.932346 kernel: audit: type=1106 audit(1742236952.920:278): pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.932491 kernel: audit: type=1104 audit(1742236952.920:279): pid=2899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:32.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.81:22-10.0.0.1:56336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:33.788036 kubelet[2216]: E0317 18:42:33.787698 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:33.791568 kubelet[2216]: E0317 18:42:33.791507 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:34.792972 kubelet[2216]: E0317 18:42:34.792945 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:35.117184 kubelet[2216]: E0317 18:42:35.117055 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:35.346189 env[1309]: time="2025-03-17T18:42:35.346122834Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:42:35.366303 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a-rootfs.mount: Deactivated successfully. 
Mar 17 18:42:35.441073 kubelet[2216]: I0317 18:42:35.440985 2216 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:42:35.538196 env[1309]: time="2025-03-17T18:42:35.538144755Z" level=info msg="shim disconnected" id=450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a Mar 17 18:42:35.538196 env[1309]: time="2025-03-17T18:42:35.538197123Z" level=warning msg="cleaning up after shim disconnected" id=450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a namespace=k8s.io Mar 17 18:42:35.538196 env[1309]: time="2025-03-17T18:42:35.538206581Z" level=info msg="cleaning up dead shim" Mar 17 18:42:35.543789 env[1309]: time="2025-03-17T18:42:35.543756617Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2968 runtime=io.containerd.runc.v2\n" Mar 17 18:42:35.687177 kubelet[2216]: I0317 18:42:35.687134 2216 topology_manager.go:215] "Topology Admit Handler" podUID="182af1a9-314d-44fe-9729-65e5c19acc5a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qvfwr" Mar 17 18:42:35.725886 kubelet[2216]: I0317 18:42:35.725581 2216 topology_manager.go:215] "Topology Admit Handler" podUID="e63620b8-dbf8-4096-bfe9-5ecbfa441daf" podNamespace="calico-apiserver" podName="calico-apiserver-9bd99c456-2chqm" Mar 17 18:42:35.725886 kubelet[2216]: I0317 18:42:35.725761 2216 topology_manager.go:215] "Topology Admit Handler" podUID="5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k6vcp" Mar 17 18:42:35.726060 kubelet[2216]: I0317 18:42:35.725985 2216 topology_manager.go:215] "Topology Admit Handler" podUID="7b65dbfa-20f5-4d4f-8c03-972a550ae421" podNamespace="calico-system" podName="calico-kube-controllers-75cb744445-d9hht" Mar 17 18:42:35.726393 kubelet[2216]: I0317 18:42:35.726227 2216 topology_manager.go:215] "Topology Admit Handler" podUID="fe03e3e2-d761-40da-81d3-dd75baa1eeea" podNamespace="calico-apiserver" 
podName="calico-apiserver-9bd99c456-c2kg7" Mar 17 18:42:35.782422 kubelet[2216]: I0317 18:42:35.782372 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f72cs\" (UniqueName: \"kubernetes.io/projected/182af1a9-314d-44fe-9729-65e5c19acc5a-kube-api-access-f72cs\") pod \"coredns-7db6d8ff4d-qvfwr\" (UID: \"182af1a9-314d-44fe-9729-65e5c19acc5a\") " pod="kube-system/coredns-7db6d8ff4d-qvfwr" Mar 17 18:42:35.782422 kubelet[2216]: I0317 18:42:35.782419 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/182af1a9-314d-44fe-9729-65e5c19acc5a-config-volume\") pod \"coredns-7db6d8ff4d-qvfwr\" (UID: \"182af1a9-314d-44fe-9729-65e5c19acc5a\") " pod="kube-system/coredns-7db6d8ff4d-qvfwr" Mar 17 18:42:35.796172 kubelet[2216]: E0317 18:42:35.796143 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:35.796746 env[1309]: time="2025-03-17T18:42:35.796719424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Mar 17 18:42:35.883282 kubelet[2216]: I0317 18:42:35.883236 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fqf\" (UniqueName: \"kubernetes.io/projected/5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04-kube-api-access-l5fqf\") pod \"coredns-7db6d8ff4d-k6vcp\" (UID: \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\") " pod="kube-system/coredns-7db6d8ff4d-k6vcp" Mar 17 18:42:35.883282 kubelet[2216]: I0317 18:42:35.883281 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b65dbfa-20f5-4d4f-8c03-972a550ae421-tigera-ca-bundle\") pod \"calico-kube-controllers-75cb744445-d9hht\" (UID: 
\"7b65dbfa-20f5-4d4f-8c03-972a550ae421\") " pod="calico-system/calico-kube-controllers-75cb744445-d9hht" Mar 17 18:42:35.883463 kubelet[2216]: I0317 18:42:35.883326 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b774g\" (UniqueName: \"kubernetes.io/projected/fe03e3e2-d761-40da-81d3-dd75baa1eeea-kube-api-access-b774g\") pod \"calico-apiserver-9bd99c456-c2kg7\" (UID: \"fe03e3e2-d761-40da-81d3-dd75baa1eeea\") " pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" Mar 17 18:42:35.883463 kubelet[2216]: I0317 18:42:35.883368 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqrf\" (UniqueName: \"kubernetes.io/projected/e63620b8-dbf8-4096-bfe9-5ecbfa441daf-kube-api-access-8wqrf\") pod \"calico-apiserver-9bd99c456-2chqm\" (UID: \"e63620b8-dbf8-4096-bfe9-5ecbfa441daf\") " pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" Mar 17 18:42:35.883463 kubelet[2216]: I0317 18:42:35.883392 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe03e3e2-d761-40da-81d3-dd75baa1eeea-calico-apiserver-certs\") pod \"calico-apiserver-9bd99c456-c2kg7\" (UID: \"fe03e3e2-d761-40da-81d3-dd75baa1eeea\") " pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" Mar 17 18:42:35.883463 kubelet[2216]: I0317 18:42:35.883416 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e63620b8-dbf8-4096-bfe9-5ecbfa441daf-calico-apiserver-certs\") pod \"calico-apiserver-9bd99c456-2chqm\" (UID: \"e63620b8-dbf8-4096-bfe9-5ecbfa441daf\") " pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" Mar 17 18:42:35.883552 kubelet[2216]: I0317 18:42:35.883470 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04-config-volume\") pod \"coredns-7db6d8ff4d-k6vcp\" (UID: \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\") " pod="kube-system/coredns-7db6d8ff4d-k6vcp" Mar 17 18:42:35.883552 kubelet[2216]: I0317 18:42:35.883515 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbpfr\" (UniqueName: \"kubernetes.io/projected/7b65dbfa-20f5-4d4f-8c03-972a550ae421-kube-api-access-sbpfr\") pod \"calico-kube-controllers-75cb744445-d9hht\" (UID: \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\") " pod="calico-system/calico-kube-controllers-75cb744445-d9hht" Mar 17 18:42:35.989304 kubelet[2216]: E0317 18:42:35.989219 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:35.990550 env[1309]: time="2025-03-17T18:42:35.989637647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qvfwr,Uid:182af1a9-314d-44fe-9729-65e5c19acc5a,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:36.630011 env[1309]: time="2025-03-17T18:42:36.629968284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bd99c456-2chqm,Uid:e63620b8-dbf8-4096-bfe9-5ecbfa441daf,Namespace:calico-apiserver,Attempt:0,}" Mar 17 18:42:36.632167 kubelet[2216]: E0317 18:42:36.632148 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:42:36.632475 env[1309]: time="2025-03-17T18:42:36.632448503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75cb744445-d9hht,Uid:7b65dbfa-20f5-4d4f-8c03-972a550ae421,Namespace:calico-system,Attempt:0,}" Mar 17 18:42:36.632837 env[1309]: time="2025-03-17T18:42:36.632798440Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-k6vcp,Uid:5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04,Namespace:kube-system,Attempt:0,}" Mar 17 18:42:36.632837 env[1309]: time="2025-03-17T18:42:36.632825120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bd99c456-c2kg7,Uid:fe03e3e2-d761-40da-81d3-dd75baa1eeea,Namespace:calico-apiserver,Attempt:0,}" Mar 17 18:42:36.935587 env[1309]: time="2025-03-17T18:42:36.932042404Z" level=error msg="Failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:36.935587 env[1309]: time="2025-03-17T18:42:36.932406437Z" level=error msg="encountered an error cleaning up failed sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:36.935587 env[1309]: time="2025-03-17T18:42:36.932460308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qvfwr,Uid:182af1a9-314d-44fe-9729-65e5c19acc5a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:36.934882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af-shm.mount: Deactivated successfully. 
Mar 17 18:42:36.936083 kubelet[2216]: E0317 18:42:36.932721 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:36.936083 kubelet[2216]: E0317 18:42:36.932808 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qvfwr" Mar 17 18:42:36.936083 kubelet[2216]: E0317 18:42:36.932833 2216 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qvfwr" Mar 17 18:42:36.936365 kubelet[2216]: E0317 18:42:36.932898 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qvfwr_kube-system(182af1a9-314d-44fe-9729-65e5c19acc5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qvfwr_kube-system(182af1a9-314d-44fe-9729-65e5c19acc5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qvfwr" podUID="182af1a9-314d-44fe-9729-65e5c19acc5a" Mar 17 18:42:37.119902 env[1309]: time="2025-03-17T18:42:37.119838137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwbh5,Uid:06aa47e7-14c4-4c99-9d64-88ed0bae7c98,Namespace:calico-system,Attempt:0,}" Mar 17 18:42:37.455733 env[1309]: time="2025-03-17T18:42:37.455670927Z" level=error msg="Failed to destroy network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.456351 env[1309]: time="2025-03-17T18:42:37.456317591Z" level=error msg="encountered an error cleaning up failed sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.456518 env[1309]: time="2025-03-17T18:42:37.456484215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k6vcp,Uid:5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.457270 kubelet[2216]: E0317 18:42:37.456881 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.457270 kubelet[2216]: E0317 18:42:37.456947 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k6vcp" Mar 17 18:42:37.457270 kubelet[2216]: E0317 18:42:37.456973 2216 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k6vcp" Mar 17 18:42:37.457439 kubelet[2216]: E0317 18:42:37.457023 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-k6vcp_kube-system(5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-k6vcp_kube-system(5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-k6vcp" podUID="5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04" Mar 17 18:42:37.458355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035-shm.mount: Deactivated successfully. Mar 17 18:42:37.480503 env[1309]: time="2025-03-17T18:42:37.480443034Z" level=error msg="Failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.486246 env[1309]: time="2025-03-17T18:42:37.480786229Z" level=error msg="encountered an error cleaning up failed sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.486246 env[1309]: time="2025-03-17T18:42:37.480823669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75cb744445-d9hht,Uid:7b65dbfa-20f5-4d4f-8c03-972a550ae421,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.486246 env[1309]: time="2025-03-17T18:42:37.483006409Z" level=error msg="Failed to destroy network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 17 18:42:37.482741 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9-shm.mount: Deactivated successfully. Mar 17 18:42:37.486452 kubelet[2216]: E0317 18:42:37.481046 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.486452 kubelet[2216]: E0317 18:42:37.481098 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75cb744445-d9hht" Mar 17 18:42:37.486452 kubelet[2216]: E0317 18:42:37.481117 2216 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75cb744445-d9hht" Mar 17 18:42:37.486542 env[1309]: time="2025-03-17T18:42:37.486410452Z" level=error msg="encountered an error cleaning up failed sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.486542 env[1309]: time="2025-03-17T18:42:37.486474022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwbh5,Uid:06aa47e7-14c4-4c99-9d64-88ed0bae7c98,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.485156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb-shm.mount: Deactivated successfully. Mar 17 18:42:37.486643 kubelet[2216]: E0317 18:42:37.481153 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75cb744445-d9hht_calico-system(7b65dbfa-20f5-4d4f-8c03-972a550ae421)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75cb744445-d9hht_calico-system(7b65dbfa-20f5-4d4f-8c03-972a550ae421)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75cb744445-d9hht" podUID="7b65dbfa-20f5-4d4f-8c03-972a550ae421" Mar 17 18:42:37.487192 kubelet[2216]: E0317 18:42:37.486702 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.487192 kubelet[2216]: E0317 18:42:37.486764 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwbh5" Mar 17 18:42:37.487192 kubelet[2216]: E0317 18:42:37.486787 2216 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rwbh5" Mar 17 18:42:37.487290 kubelet[2216]: E0317 18:42:37.486838 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rwbh5_calico-system(06aa47e7-14c4-4c99-9d64-88ed0bae7c98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rwbh5_calico-system(06aa47e7-14c4-4c99-9d64-88ed0bae7c98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:37.490720 env[1309]: 
time="2025-03-17T18:42:37.490664923Z" level=error msg="Failed to destroy network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.494232 env[1309]: time="2025-03-17T18:42:37.493792978Z" level=error msg="encountered an error cleaning up failed sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.494232 env[1309]: time="2025-03-17T18:42:37.493880002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bd99c456-2chqm,Uid:e63620b8-dbf8-4096-bfe9-5ecbfa441daf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.494357 kubelet[2216]: E0317 18:42:37.494115 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.494357 kubelet[2216]: E0317 18:42:37.494174 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" Mar 17 18:42:37.494357 kubelet[2216]: E0317 18:42:37.494193 2216 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" Mar 17 18:42:37.492985 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7-shm.mount: Deactivated successfully. Mar 17 18:42:37.494562 kubelet[2216]: E0317 18:42:37.494234 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9bd99c456-2chqm_calico-apiserver(e63620b8-dbf8-4096-bfe9-5ecbfa441daf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9bd99c456-2chqm_calico-apiserver(e63620b8-dbf8-4096-bfe9-5ecbfa441daf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" podUID="e63620b8-dbf8-4096-bfe9-5ecbfa441daf" Mar 17 18:42:37.501204 env[1309]: time="2025-03-17T18:42:37.501121933Z" level=error msg="Failed to destroy network for sandbox 
\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.501602 env[1309]: time="2025-03-17T18:42:37.501566598Z" level=error msg="encountered an error cleaning up failed sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.501668 env[1309]: time="2025-03-17T18:42:37.501624717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bd99c456-c2kg7,Uid:fe03e3e2-d761-40da-81d3-dd75baa1eeea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.501899 kubelet[2216]: E0317 18:42:37.501845 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.501969 kubelet[2216]: E0317 18:42:37.501913 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" Mar 17 18:42:37.501969 kubelet[2216]: E0317 18:42:37.501931 2216 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" Mar 17 18:42:37.502032 kubelet[2216]: E0317 18:42:37.501968 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9bd99c456-c2kg7_calico-apiserver(fe03e3e2-d761-40da-81d3-dd75baa1eeea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9bd99c456-c2kg7_calico-apiserver(fe03e3e2-d761-40da-81d3-dd75baa1eeea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" podUID="fe03e3e2-d761-40da-81d3-dd75baa1eeea" Mar 17 18:42:37.799930 kubelet[2216]: I0317 18:42:37.799802 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:42:37.801155 env[1309]: time="2025-03-17T18:42:37.801123816Z" level=info msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\"" Mar 17 18:42:37.801527 kubelet[2216]: I0317 18:42:37.801495 
2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:42:37.802092 env[1309]: time="2025-03-17T18:42:37.802060325Z" level=info msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\"" Mar 17 18:42:37.802823 kubelet[2216]: I0317 18:42:37.802790 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:42:37.803303 env[1309]: time="2025-03-17T18:42:37.803277080Z" level=info msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\"" Mar 17 18:42:37.804636 kubelet[2216]: I0317 18:42:37.804187 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:42:37.804737 env[1309]: time="2025-03-17T18:42:37.804681969Z" level=info msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" Mar 17 18:42:37.806507 kubelet[2216]: I0317 18:42:37.806098 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:42:37.806726 env[1309]: time="2025-03-17T18:42:37.806702143Z" level=info msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\"" Mar 17 18:42:37.807064 kubelet[2216]: I0317 18:42:37.807043 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:42:37.807684 env[1309]: time="2025-03-17T18:42:37.807618785Z" level=info msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" Mar 17 18:42:37.840643 env[1309]: time="2025-03-17T18:42:37.840579252Z" level=error 
msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" failed" error="failed to destroy network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.841106 kubelet[2216]: E0317 18:42:37.841041 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:42:37.841186 kubelet[2216]: E0317 18:42:37.841116 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"} Mar 17 18:42:37.841230 kubelet[2216]: E0317 18:42:37.841200 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e63620b8-dbf8-4096-bfe9-5ecbfa441daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:37.841319 kubelet[2216]: E0317 18:42:37.841231 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e63620b8-dbf8-4096-bfe9-5ecbfa441daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" podUID="e63620b8-dbf8-4096-bfe9-5ecbfa441daf" Mar 17 18:42:37.860707 env[1309]: time="2025-03-17T18:42:37.860609412Z" level=error msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" failed" error="failed to destroy network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.860924 kubelet[2216]: E0317 18:42:37.860875 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:42:37.860998 kubelet[2216]: E0317 18:42:37.860921 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb"} Mar 17 18:42:37.860998 kubelet[2216]: E0317 18:42:37.860956 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:37.860998 kubelet[2216]: E0317 18:42:37.860977 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:42:37.878559 env[1309]: time="2025-03-17T18:42:37.878485316Z" level=error msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" failed" error="failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.878807 kubelet[2216]: E0317 18:42:37.878761 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:42:37.878903 kubelet[2216]: E0317 18:42:37.878823 2216 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af"} Mar 17 18:42:37.878903 kubelet[2216]: E0317 18:42:37.878890 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"182af1a9-314d-44fe-9729-65e5c19acc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:37.879030 kubelet[2216]: E0317 18:42:37.878920 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"182af1a9-314d-44fe-9729-65e5c19acc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qvfwr" podUID="182af1a9-314d-44fe-9729-65e5c19acc5a" Mar 17 18:42:37.879515 env[1309]: time="2025-03-17T18:42:37.879474945Z" level=error msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" failed" error="failed to destroy network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.879644 kubelet[2216]: E0317 18:42:37.879615 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:42:37.879696 kubelet[2216]: E0317 18:42:37.879649 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035"} Mar 17 18:42:37.879735 kubelet[2216]: E0317 18:42:37.879687 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:37.879735 kubelet[2216]: E0317 18:42:37.879719 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k6vcp" podUID="5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04" Mar 17 18:42:37.879952 env[1309]: time="2025-03-17T18:42:37.879920232Z" level=error msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" 
failed" error="failed to destroy network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.880050 kubelet[2216]: E0317 18:42:37.880028 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:42:37.880102 kubelet[2216]: E0317 18:42:37.880055 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3"} Mar 17 18:42:37.880102 kubelet[2216]: E0317 18:42:37.880078 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe03e3e2-d761-40da-81d3-dd75baa1eeea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:37.880214 kubelet[2216]: E0317 18:42:37.880103 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe03e3e2-d761-40da-81d3-dd75baa1eeea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" podUID="fe03e3e2-d761-40da-81d3-dd75baa1eeea" Mar 17 18:42:37.882119 env[1309]: time="2025-03-17T18:42:37.882055713Z" level=error msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" failed" error="failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:37.882308 kubelet[2216]: E0317 18:42:37.882270 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:42:37.882371 kubelet[2216]: E0317 18:42:37.882319 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9"} Mar 17 18:42:37.882371 kubelet[2216]: E0317 18:42:37.882356 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:37.882505 kubelet[2216]: E0317 18:42:37.882388 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75cb744445-d9hht" podUID="7b65dbfa-20f5-4d4f-8c03-972a550ae421" Mar 17 18:42:37.924986 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:46008.service. Mar 17 18:42:37.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.81:22-10.0.0.1:46008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:37.926879 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:42:37.926936 kernel: audit: type=1130 audit(1742236957.924:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.81:22-10.0.0.1:46008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:37.960000 audit[3370]: USER_ACCT pid=3370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:37.969071 kernel: audit: type=1101 audit(1742236957.960:282): pid=3370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:37.969107 sshd[3370]: Accepted publickey for core from 10.0.0.1 port 46008 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:42:37.968000 audit[3370]: CRED_ACQ pid=3370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:37.969946 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:37.978892 kernel: audit: type=1103 audit(1742236957.968:283): pid=3370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:37.979048 kernel: audit: type=1006 audit(1742236957.968:284): pid=3370 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Mar 17 18:42:37.983905 systemd[1]: Started session-9.scope. 
Mar 17 18:42:37.968000 audit[3370]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe00dd0e0 a2=3 a3=0 items=0 ppid=1 pid=3370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:37.985030 systemd-logind[1293]: New session 9 of user core.
Mar 17 18:42:37.992152 kernel: audit: type=1300 audit(1742236957.968:284): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe00dd0e0 a2=3 a3=0 items=0 ppid=1 pid=3370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:37.992200 kernel: audit: type=1327 audit(1742236957.968:284): proctitle=737368643A20636F7265205B707269765D
Mar 17 18:42:37.968000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:42:37.992000 audit[3370]: USER_START pid=3370 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:38.002379 kernel: audit: type=1105 audit(1742236957.992:285): pid=3370 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:38.002516 kernel: audit: type=1103 audit(1742236957.993:286): pid=3373 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:37.993000 audit[3373]: CRED_ACQ pid=3373 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:38.125428 sshd[3370]: pam_unix(sshd:session): session closed for user core
Mar 17 18:42:38.126000 audit[3370]: USER_END pid=3370 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:38.127000 audit[3370]: CRED_DISP pid=3370 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:38.129882 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:46008.service: Deactivated successfully.
Mar 17 18:42:38.130736 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:42:38.131421 systemd-logind[1293]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:42:38.133431 systemd-logind[1293]: Removed session 9.
Mar 17 18:42:38.141109 kernel: audit: type=1106 audit(1742236958.126:287): pid=3370 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:38.141183 kernel: audit: type=1104 audit(1742236958.127:288): pid=3370 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:38.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.81:22-10.0.0.1:46008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:38.414014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3-shm.mount: Deactivated successfully.
Mar 17 18:42:42.957151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094240863.mount: Deactivated successfully.
Mar 17 18:42:43.128787 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:46018.service.
Mar 17 18:42:43.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.81:22-10.0.0.1:46018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:43.152362 kernel: kauditd_printk_skb: 1 callbacks suppressed
Mar 17 18:42:43.152466 kernel: audit: type=1130 audit(1742236963.128:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.81:22-10.0.0.1:46018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:43.277472 kubelet[2216]: I0317 18:42:43.277357 2216 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 18:42:43.278133 kubelet[2216]: E0317 18:42:43.278106 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:42:44.694719 kubelet[2216]: E0317 18:42:44.694683 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:42:44.718673 env[1309]: time="2025-03-17T18:42:44.718548041Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:44.721175 env[1309]: time="2025-03-17T18:42:44.721141709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:44.724477 env[1309]: time="2025-03-17T18:42:44.724435882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:44.726313 env[1309]: time="2025-03-17T18:42:44.726270285Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:42:44.726586 env[1309]: time="2025-03-17T18:42:44.726558206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Mar 17 18:42:44.729000 audit[3394]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:42:44.731572 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 46018 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:42:44.729000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc604cf630 a2=0 a3=7ffc604cf61c items=0 ppid=2420 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:44.733999 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:44.739145 kernel: audit: type=1325 audit(1742236964.729:291): table=filter:95 family=2 entries=17 op=nft_register_rule pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:42:44.739246 kernel: audit: type=1300 audit(1742236964.729:291): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc604cf630 a2=0 a3=7ffc604cf61c items=0 ppid=2420 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:44.739279 kernel: audit: type=1327 audit(1742236964.729:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:44.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:44.741676 env[1309]: time="2025-03-17T18:42:44.741631459Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 17 18:42:44.742143 kernel: audit: type=1101 audit(1742236964.730:292): pid=3390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.730000 audit[3390]: USER_ACCT pid=3390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.744154 systemd-logind[1293]: New session 10 of user core.
Mar 17 18:42:44.745417 systemd[1]: Started session-10.scope.
Mar 17 18:42:44.747381 kernel: audit: type=1103 audit(1742236964.732:293): pid=3390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.732000 audit[3390]: CRED_ACQ pid=3390 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.752266 kernel: audit: type=1006 audit(1742236964.732:294): pid=3390 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Mar 17 18:42:44.732000 audit[3390]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff48cef9b0 a2=3 a3=0 items=0 ppid=1 pid=3390 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:44.759427 kernel: audit: type=1300 audit(1742236964.732:294): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff48cef9b0 a2=3 a3=0 items=0 ppid=1 pid=3390 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:44.759470 kernel: audit: type=1327 audit(1742236964.732:294): proctitle=737368643A20636F7265205B707269765D
Mar 17 18:42:44.732000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:42:44.750000 audit[3390]: USER_START pid=3390 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.765961 env[1309]: time="2025-03-17T18:42:44.765915871Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00\""
Mar 17 18:42:44.766284 kernel: audit: type=1105 audit(1742236964.750:295): pid=3390 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.754000 audit[3397]: CRED_ACQ pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.754000 audit[3394]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:42:44.754000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc604cf630 a2=0 a3=7ffc604cf61c items=0 ppid=2420 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:44.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:42:44.766468 env[1309]: time="2025-03-17T18:42:44.766391945Z" level=info msg="StartContainer for \"838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00\""
Mar 17 18:42:44.912663 env[1309]: time="2025-03-17T18:42:44.912570156Z" level=info msg="StartContainer for \"838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00\" returns successfully"
Mar 17 18:42:44.958107 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Mar 17 18:42:44.958264 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Mar 17 18:42:44.960130 sshd[3390]: pam_unix(sshd:session): session closed for user core
Mar 17 18:42:44.960000 audit[3390]: USER_END pid=3390 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.960000 audit[3390]: CRED_DISP pid=3390 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:44.962929 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:46018.service: Deactivated successfully.
Mar 17 18:42:44.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.81:22-10.0.0.1:46018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.963831 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:42:44.966274 systemd-logind[1293]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:42:44.967157 systemd-logind[1293]: Removed session 10.
Mar 17 18:42:45.064277 env[1309]: time="2025-03-17T18:42:45.064213198Z" level=info msg="shim disconnected" id=838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00
Mar 17 18:42:45.064277 env[1309]: time="2025-03-17T18:42:45.064274223Z" level=warning msg="cleaning up after shim disconnected" id=838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00 namespace=k8s.io
Mar 17 18:42:45.064277 env[1309]: time="2025-03-17T18:42:45.064284843Z" level=info msg="cleaning up dead shim"
Mar 17 18:42:45.070353 env[1309]: time="2025-03-17T18:42:45.070306484Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3472 runtime=io.containerd.runc.v2\n"
Mar 17 18:42:45.733035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00-rootfs.mount: Deactivated successfully.
Mar 17 18:42:45.822088 kubelet[2216]: I0317 18:42:45.822063 2216 scope.go:117] "RemoveContainer" containerID="838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00"
Mar 17 18:42:45.822501 kubelet[2216]: E0317 18:42:45.822130 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:42:45.824611 env[1309]: time="2025-03-17T18:42:45.824558727Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}"
Mar 17 18:42:45.840284 env[1309]: time="2025-03-17T18:42:45.840236563Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa\""
Mar 17 18:42:45.840948 env[1309]: time="2025-03-17T18:42:45.840922941Z" level=info msg="StartContainer for \"00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa\""
Mar 17 18:42:45.890484 env[1309]: time="2025-03-17T18:42:45.890424742Z" level=info msg="StartContainer for \"00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa\" returns successfully"
Mar 17 18:42:45.958900 env[1309]: time="2025-03-17T18:42:45.958823771Z" level=info msg="shim disconnected" id=00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa
Mar 17 18:42:45.958900 env[1309]: time="2025-03-17T18:42:45.958898361Z" level=warning msg="cleaning up after shim disconnected" id=00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa namespace=k8s.io
Mar 17 18:42:45.959109 env[1309]: time="2025-03-17T18:42:45.958910964Z" level=info msg="cleaning up dead shim"
Mar 17 18:42:45.965068 env[1309]: time="2025-03-17T18:42:45.965020801Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:42:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3536 runtime=io.containerd.runc.v2\n"
Mar 17 18:42:46.733353 systemd[1]: run-containerd-runc-k8s.io-00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa-runc.Zu65nK.mount: Deactivated successfully.
Mar 17 18:42:46.733482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa-rootfs.mount: Deactivated successfully.
Mar 17 18:42:46.825656 kubelet[2216]: I0317 18:42:46.825614 2216 scope.go:117] "RemoveContainer" containerID="838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00"
Mar 17 18:42:46.826135 kubelet[2216]: I0317 18:42:46.826034 2216 scope.go:117] "RemoveContainer" containerID="00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa"
Mar 17 18:42:46.826135 kubelet[2216]: E0317 18:42:46.826098 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:42:46.826742 kubelet[2216]: E0317 18:42:46.826602 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-lr4b4_calico-system(8fbc69a3-8524-4440-a2d9-778d5b73de1b)\"" pod="calico-system/calico-node-lr4b4" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b"
Mar 17 18:42:46.827596 env[1309]: time="2025-03-17T18:42:46.827433132Z" level=info msg="RemoveContainer for \"838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00\""
Mar 17 18:42:46.882104 env[1309]: time="2025-03-17T18:42:46.882064113Z" level=info msg="RemoveContainer for \"838475d87e33dc8002693fe2b57173278affa68b010fdd370ab2c2b41b864d00\" returns successfully"
Mar 17 18:42:48.117767 env[1309]: time="2025-03-17T18:42:48.117708911Z" level=info msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\""
Mar 17 18:42:48.142197 env[1309]: time="2025-03-17T18:42:48.142116052Z" level=error msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" failed" error="failed to destroy network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:42:48.142470 kubelet[2216]: E0317 18:42:48.142409 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb"
Mar 17 18:42:48.142735 kubelet[2216]: E0317 18:42:48.142467 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb"}
Mar 17 18:42:48.142735 kubelet[2216]: E0317 18:42:48.142503 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 17 18:42:48.142735 kubelet[2216]: E0317 18:42:48.142525 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98"
Mar 17 18:42:49.963311 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:44092.service.
Mar 17 18:42:49.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.81:22-10.0.0.1:44092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:49.964548 kernel: kauditd_printk_skb: 7 callbacks suppressed
Mar 17 18:42:49.964610 kernel: audit: type=1130 audit(1742236969.962:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.81:22-10.0.0.1:44092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:49.992000 audit[3571]: USER_ACCT pid=3571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:49.993938 sshd[3571]: Accepted publickey for core from 10.0.0.1 port 44092 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:42:49.997401 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:49.996000 audit[3571]: CRED_ACQ pid=3571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.001173 systemd-logind[1293]: New session 11 of user core.
Mar 17 18:42:50.001834 kernel: audit: type=1101 audit(1742236969.992:302): pid=3571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.001900 kernel: audit: type=1103 audit(1742236969.996:303): pid=3571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.001921 kernel: audit: type=1006 audit(1742236969.996:304): pid=3571 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1
Mar 17 18:42:50.002169 systemd[1]: Started session-11.scope.
Mar 17 18:42:50.004088 kernel: audit: type=1300 audit(1742236969.996:304): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe91da4830 a2=3 a3=0 items=0 ppid=1 pid=3571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:49.996000 audit[3571]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe91da4830 a2=3 a3=0 items=0 ppid=1 pid=3571 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:49.996000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:42:50.009488 kernel: audit: type=1327 audit(1742236969.996:304): proctitle=737368643A20636F7265205B707269765D
Mar 17 18:42:50.009533 kernel: audit: type=1105 audit(1742236970.006:305): pid=3571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.006000 audit[3571]: USER_START pid=3571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.013773 kernel: audit: type=1103 audit(1742236970.007:306): pid=3574 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.007000 audit[3574]: CRED_ACQ pid=3574 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.106011 sshd[3571]: pam_unix(sshd:session): session closed for user core
Mar 17 18:42:50.108535 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:44098.service.
Mar 17 18:42:50.108974 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:44092.service: Deactivated successfully.
Mar 17 18:42:50.106000 audit[3571]: USER_END pid=3571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.110524 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:42:50.117552 kernel: audit: type=1106 audit(1742236970.106:307): pid=3571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.117669 kernel: audit: type=1104 audit(1742236970.106:308): pid=3571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.106000 audit[3571]: CRED_DISP pid=3571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.114015 systemd-logind[1293]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:42:50.114790 systemd-logind[1293]: Removed session 11.
Mar 17 18:42:50.117916 env[1309]: time="2025-03-17T18:42:50.117736038Z" level=info msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\""
Mar 17 18:42:50.118132 env[1309]: time="2025-03-17T18:42:50.118085643Z" level=info msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\""
Mar 17 18:42:50.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.81:22-10.0.0.1:44098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:50.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.81:22-10.0.0.1:44092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:50.140000 audit[3586]: USER_ACCT pid=3586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.141197 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 44098 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:42:50.141000 audit[3586]: CRED_ACQ pid=3586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.141000 audit[3586]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef6282e00 a2=3 a3=0 items=0 ppid=1 pid=3586 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:42:50.141000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:42:50.142549 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:42:50.145239 env[1309]: time="2025-03-17T18:42:50.145187998Z" level=error msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" failed" error="failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:42:50.145443 kubelet[2216]: E0317 18:42:50.145385 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af"
Mar 17 18:42:50.145703 kubelet[2216]: E0317 18:42:50.145440 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af"}
Mar 17 18:42:50.145703 kubelet[2216]: E0317 18:42:50.145469 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"182af1a9-314d-44fe-9729-65e5c19acc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 17 18:42:50.145703 kubelet[2216]: E0317 18:42:50.145490 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"182af1a9-314d-44fe-9729-65e5c19acc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qvfwr" podUID="182af1a9-314d-44fe-9729-65e5c19acc5a"
Mar 17 18:42:50.146322 systemd-logind[1293]: New session 12 of user core.
Mar 17 18:42:50.147182 systemd[1]: Started session-12.scope.
Mar 17 18:42:50.149962 env[1309]: time="2025-03-17T18:42:50.149897865Z" level=error msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" failed" error="failed to destroy network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 18:42:50.150431 kubelet[2216]: E0317 18:42:50.150281 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"
Mar 17 18:42:50.150431 kubelet[2216]: E0317 18:42:50.150330 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"}
Mar 17 18:42:50.150431 kubelet[2216]: E0317 18:42:50.150362 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e63620b8-dbf8-4096-bfe9-5ecbfa441daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 17 18:42:50.150431 kubelet[2216]: E0317 18:42:50.150390 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e63620b8-dbf8-4096-bfe9-5ecbfa441daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" podUID="e63620b8-dbf8-4096-bfe9-5ecbfa441daf"
Mar 17 18:42:50.150000 audit[3586]: USER_START pid=3586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.151000 audit[3635]: CRED_ACQ pid=3635 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.273995 sshd[3586]: pam_unix(sshd:session): session closed for user core
Mar 17 18:42:50.274000 audit[3586]: USER_END pid=3586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.274000 audit[3586]: CRED_DISP pid=3586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:42:50.276636 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:44108.service.
Mar 17 18:42:50.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.81:22-10.0.0.1:44108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:50.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.81:22-10.0.0.1:44098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:50.277144 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:44098.service: Deactivated successfully.
Mar 17 18:42:50.278365 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:42:50.278924 systemd-logind[1293]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:42:50.279784 systemd-logind[1293]: Removed session 12.
Mar 17 18:42:50.309000 audit[3642]: USER_ACCT pid=3642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:50.310551 sshd[3642]: Accepted publickey for core from 10.0.0.1 port 44108 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:42:50.310000 audit[3642]: CRED_ACQ pid=3642 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:50.310000 audit[3642]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbef3fd40 a2=3 a3=0 items=0 ppid=1 pid=3642 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:50.310000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:42:50.311506 sshd[3642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:50.314504 systemd-logind[1293]: New session 13 of user core. Mar 17 18:42:50.315220 systemd[1]: Started session-13.scope. 
Mar 17 18:42:50.317000 audit[3642]: USER_START pid=3642 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:50.318000 audit[3647]: CRED_ACQ pid=3647 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:50.417706 sshd[3642]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:50.417000 audit[3642]: USER_END pid=3642 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:50.417000 audit[3642]: CRED_DISP pid=3642 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:50.419961 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:44108.service: Deactivated successfully. Mar 17 18:42:50.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.81:22-10.0.0.1:44108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.420793 systemd-logind[1293]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:42:50.420846 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:42:50.421479 systemd-logind[1293]: Removed session 13. 
Mar 17 18:42:51.117308 env[1309]: time="2025-03-17T18:42:51.117255398Z" level=info msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" Mar 17 18:42:51.118040 env[1309]: time="2025-03-17T18:42:51.118008370Z" level=info msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" Mar 17 18:42:51.141258 env[1309]: time="2025-03-17T18:42:51.141202229Z" level=error msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" failed" error="failed to destroy network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:51.141676 kubelet[2216]: E0317 18:42:51.141619 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:42:51.141739 kubelet[2216]: E0317 18:42:51.141682 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035"} Mar 17 18:42:51.141739 kubelet[2216]: E0317 18:42:51.141714 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:51.141849 kubelet[2216]: E0317 18:42:51.141746 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k6vcp" podUID="5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04" Mar 17 18:42:51.143611 env[1309]: time="2025-03-17T18:42:51.143562818Z" level=error msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" failed" error="failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:42:51.143777 kubelet[2216]: E0317 18:42:51.143713 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:42:51.143777 kubelet[2216]: E0317 18:42:51.143764 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9"} Mar 17 18:42:51.143848 kubelet[2216]: E0317 18:42:51.143803 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:51.143930 kubelet[2216]: E0317 18:42:51.143835 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75cb744445-d9hht" podUID="7b65dbfa-20f5-4d4f-8c03-972a550ae421" Mar 17 18:42:52.116798 env[1309]: time="2025-03-17T18:42:52.116726241Z" level=info msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\"" Mar 17 18:42:52.142920 env[1309]: time="2025-03-17T18:42:52.142836960Z" level=error msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" failed" error="failed to destroy network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 
18:42:52.143267 kubelet[2216]: E0317 18:42:52.143091 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:42:52.143267 kubelet[2216]: E0317 18:42:52.143139 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3"} Mar 17 18:42:52.143267 kubelet[2216]: E0317 18:42:52.143169 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe03e3e2-d761-40da-81d3-dd75baa1eeea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:42:52.143267 kubelet[2216]: E0317 18:42:52.143190 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe03e3e2-d761-40da-81d3-dd75baa1eeea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" podUID="fe03e3e2-d761-40da-81d3-dd75baa1eeea" Mar 17 18:42:55.420761 
systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:44112.service. Mar 17 18:42:55.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.81:22-10.0.0.1:44112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:55.425043 kernel: kauditd_printk_skb: 23 callbacks suppressed Mar 17 18:42:55.425112 kernel: audit: type=1130 audit(1742236975.420:328): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.81:22-10.0.0.1:44112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:55.451000 audit[3733]: USER_ACCT pid=3733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.453025 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 44112 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:42:55.457000 audit[3733]: CRED_ACQ pid=3733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.458248 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:55.461369 systemd-logind[1293]: New session 14 of user core. Mar 17 18:42:55.462052 systemd[1]: Started session-14.scope. 
Mar 17 18:42:55.463050 kernel: audit: type=1101 audit(1742236975.451:329): pid=3733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.463112 kernel: audit: type=1103 audit(1742236975.457:330): pid=3733 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.466499 kernel: audit: type=1006 audit(1742236975.457:331): pid=3733 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Mar 17 18:42:55.466538 kernel: audit: type=1300 audit(1742236975.457:331): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3eec0020 a2=3 a3=0 items=0 ppid=1 pid=3733 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:55.457000 audit[3733]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3eec0020 a2=3 a3=0 items=0 ppid=1 pid=3733 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:55.457000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:42:55.473631 kernel: audit: type=1327 audit(1742236975.457:331): proctitle=737368643A20636F7265205B707269765D Mar 17 18:42:55.473663 kernel: audit: type=1105 audit(1742236975.464:332): pid=3733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.464000 audit[3733]: USER_START pid=3733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.465000 audit[3736]: CRED_ACQ pid=3736 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.483664 kernel: audit: type=1103 audit(1742236975.465:333): pid=3736 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.560658 sshd[3733]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:55.560000 audit[3733]: USER_END pid=3733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.562937 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:44112.service: Deactivated successfully. Mar 17 18:42:55.563817 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:42:55.564909 systemd-logind[1293]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:42:55.565754 systemd-logind[1293]: Removed session 14. 
Mar 17 18:42:55.561000 audit[3733]: CRED_DISP pid=3733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.569478 kernel: audit: type=1106 audit(1742236975.560:334): pid=3733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.569607 kernel: audit: type=1104 audit(1742236975.561:335): pid=3733 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:42:55.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.81:22-10.0.0.1:44112 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:00.564365 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:54114.service. Mar 17 18:43:00.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.81:22-10.0.0.1:54114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:00.565884 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:43:00.565931 kernel: audit: type=1130 audit(1742236980.563:337): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.81:22-10.0.0.1:54114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:43:00.592000 audit[3747]: USER_ACCT pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.593209 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 54114 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:00.596000 audit[3747]: CRED_ACQ pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.597409 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:00.601686 kernel: audit: type=1101 audit(1742236980.592:338): pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.601745 kernel: audit: type=1103 audit(1742236980.596:339): pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.601775 kernel: audit: type=1006 audit(1742236980.596:340): pid=3747 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Mar 17 18:43:00.601556 systemd[1]: Started session-15.scope. Mar 17 18:43:00.601935 systemd-logind[1293]: New session 15 of user core. 
Mar 17 18:43:00.596000 audit[3747]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf292a320 a2=3 a3=0 items=0 ppid=1 pid=3747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:00.607034 kernel: audit: type=1300 audit(1742236980.596:340): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf292a320 a2=3 a3=0 items=0 ppid=1 pid=3747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:00.607071 kernel: audit: type=1327 audit(1742236980.596:340): proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:00.596000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:00.606000 audit[3747]: USER_START pid=3747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.612679 kernel: audit: type=1105 audit(1742236980.606:341): pid=3747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.612722 kernel: audit: type=1103 audit(1742236980.607:342): pid=3750 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.607000 audit[3750]: CRED_ACQ pid=3750 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.701321 sshd[3747]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:00.701000 audit[3747]: USER_END pid=3747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.703715 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:54114.service: Deactivated successfully. Mar 17 18:43:00.704396 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:43:00.705328 systemd-logind[1293]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:43:00.706047 systemd-logind[1293]: Removed session 15. Mar 17 18:43:00.701000 audit[3747]: CRED_DISP pid=3747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.710053 kernel: audit: type=1106 audit(1742236980.701:343): pid=3747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.710113 kernel: audit: type=1104 audit(1742236980.701:344): pid=3747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:00.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.81:22-10.0.0.1:54114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:43:01.118246 kubelet[2216]: I0317 18:43:01.117577 2216 scope.go:117] "RemoveContainer" containerID="00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa" Mar 17 18:43:01.118246 kubelet[2216]: E0317 18:43:01.117674 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:01.118631 env[1309]: time="2025-03-17T18:43:01.118027052Z" level=info msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\"" Mar 17 18:43:01.120797 env[1309]: time="2025-03-17T18:43:01.120745624Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Mar 17 18:43:01.133246 env[1309]: time="2025-03-17T18:43:01.133184768Z" level=info msg="CreateContainer within sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4\"" Mar 17 18:43:01.133975 env[1309]: time="2025-03-17T18:43:01.133951074Z" level=info msg="StartContainer for \"f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4\"" Mar 17 18:43:01.147681 env[1309]: time="2025-03-17T18:43:01.147620320Z" level=error msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" failed" error="failed to destroy network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:01.147907 kubelet[2216]: E0317 18:43:01.147850 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:01.147985 kubelet[2216]: E0317 18:43:01.147918 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb"} Mar 17 18:43:01.147985 kubelet[2216]: E0317 18:43:01.147950 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:01.147985 kubelet[2216]: E0317 18:43:01.147971 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:43:01.181342 env[1309]: time="2025-03-17T18:43:01.181300426Z" level=info msg="StartContainer for 
\"f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4\" returns successfully" Mar 17 18:43:01.243216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4-rootfs.mount: Deactivated successfully. Mar 17 18:43:01.248391 env[1309]: time="2025-03-17T18:43:01.248330031Z" level=info msg="shim disconnected" id=f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4 Mar 17 18:43:01.248538 env[1309]: time="2025-03-17T18:43:01.248398944Z" level=warning msg="cleaning up after shim disconnected" id=f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4 namespace=k8s.io Mar 17 18:43:01.248538 env[1309]: time="2025-03-17T18:43:01.248415135Z" level=info msg="cleaning up dead shim" Mar 17 18:43:01.257232 env[1309]: time="2025-03-17T18:43:01.257175906Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3836 runtime=io.containerd.runc.v2\n" Mar 17 18:43:01.857106 kubelet[2216]: I0317 18:43:01.857078 2216 scope.go:117] "RemoveContainer" containerID="00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa" Mar 17 18:43:01.857380 kubelet[2216]: I0317 18:43:01.857360 2216 scope.go:117] "RemoveContainer" containerID="f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4" Mar 17 18:43:01.857427 kubelet[2216]: E0317 18:43:01.857421 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:01.857795 kubelet[2216]: E0317 18:43:01.857766 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-lr4b4_calico-system(8fbc69a3-8524-4440-a2d9-778d5b73de1b)\"" pod="calico-system/calico-node-lr4b4" 
podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" Mar 17 18:43:01.857981 env[1309]: time="2025-03-17T18:43:01.857839323Z" level=info msg="RemoveContainer for \"00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa\"" Mar 17 18:43:01.861112 env[1309]: time="2025-03-17T18:43:01.861063961Z" level=info msg="RemoveContainer for \"00afc36359e61d6649e411fe8495999ad9b7590e53cc1bfde17f4bc3bae681fa\" returns successfully" Mar 17 18:43:03.117183 env[1309]: time="2025-03-17T18:43:03.117135689Z" level=info msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\"" Mar 17 18:43:03.140211 env[1309]: time="2025-03-17T18:43:03.140139284Z" level=error msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" failed" error="failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:03.140447 kubelet[2216]: E0317 18:43:03.140405 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:03.140719 kubelet[2216]: E0317 18:43:03.140458 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af"} Mar 17 18:43:03.140719 kubelet[2216]: E0317 18:43:03.140492 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"182af1a9-314d-44fe-9729-65e5c19acc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:03.140719 kubelet[2216]: E0317 18:43:03.140514 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"182af1a9-314d-44fe-9729-65e5c19acc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qvfwr" podUID="182af1a9-314d-44fe-9729-65e5c19acc5a" Mar 17 18:43:04.117398 env[1309]: time="2025-03-17T18:43:04.117335829Z" level=info msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\"" Mar 17 18:43:04.139150 env[1309]: time="2025-03-17T18:43:04.139083857Z" level=error msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" failed" error="failed to destroy network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:04.139334 kubelet[2216]: E0317 18:43:04.139289 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:04.139385 kubelet[2216]: E0317 18:43:04.139344 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3"} Mar 17 18:43:04.139411 kubelet[2216]: E0317 18:43:04.139386 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe03e3e2-d761-40da-81d3-dd75baa1eeea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:04.139479 kubelet[2216]: E0317 18:43:04.139415 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe03e3e2-d761-40da-81d3-dd75baa1eeea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" podUID="fe03e3e2-d761-40da-81d3-dd75baa1eeea" Mar 17 18:43:05.118249 env[1309]: time="2025-03-17T18:43:05.118185905Z" level=info msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" Mar 17 18:43:05.118800 env[1309]: 
time="2025-03-17T18:43:05.118347115Z" level=info msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\"" Mar 17 18:43:05.118800 env[1309]: time="2025-03-17T18:43:05.118198288Z" level=info msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" Mar 17 18:43:05.145193 env[1309]: time="2025-03-17T18:43:05.145116549Z" level=error msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" failed" error="failed to destroy network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:05.145447 kubelet[2216]: E0317 18:43:05.145405 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:05.145727 kubelet[2216]: E0317 18:43:05.145457 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035"} Mar 17 18:43:05.145727 kubelet[2216]: E0317 18:43:05.145487 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:05.145727 kubelet[2216]: E0317 18:43:05.145508 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k6vcp" podUID="5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04" Mar 17 18:43:05.154210 env[1309]: time="2025-03-17T18:43:05.154168590Z" level=error msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" failed" error="failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:05.154392 kubelet[2216]: E0317 18:43:05.154283 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:05.154392 kubelet[2216]: E0317 18:43:05.154313 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9"} Mar 17 18:43:05.154392 kubelet[2216]: E0317 18:43:05.154334 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:05.154392 kubelet[2216]: E0317 18:43:05.154351 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75cb744445-d9hht" podUID="7b65dbfa-20f5-4d4f-8c03-972a550ae421" Mar 17 18:43:05.159546 env[1309]: time="2025-03-17T18:43:05.159503617Z" level=error msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" failed" error="failed to destroy network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:05.159732 kubelet[2216]: E0317 18:43:05.159675 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:43:05.159790 kubelet[2216]: E0317 18:43:05.159736 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"} Mar 17 18:43:05.159790 kubelet[2216]: E0317 18:43:05.159770 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e63620b8-dbf8-4096-bfe9-5ecbfa441daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:05.159886 kubelet[2216]: E0317 18:43:05.159794 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e63620b8-dbf8-4096-bfe9-5ecbfa441daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" podUID="e63620b8-dbf8-4096-bfe9-5ecbfa441daf" Mar 17 18:43:05.704291 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:57064.service. 
Mar 17 18:43:05.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.81:22-10.0.0.1:57064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:05.705465 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:43:05.705533 kernel: audit: type=1130 audit(1742236985.702:346): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.81:22-10.0.0.1:57064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:05.734000 audit[3969]: USER_ACCT pid=3969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.736392 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 57064 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:05.738514 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:05.736000 audit[3969]: CRED_ACQ pid=3969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.742251 systemd-logind[1293]: New session 16 of user core. Mar 17 18:43:05.743108 systemd[1]: Started session-16.scope. 
Mar 17 18:43:05.747589 kernel: audit: type=1101 audit(1742236985.734:347): pid=3969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.747704 kernel: audit: type=1103 audit(1742236985.736:348): pid=3969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.747723 kernel: audit: type=1006 audit(1742236985.736:349): pid=3969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Mar 17 18:43:05.736000 audit[3969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb1074f50 a2=3 a3=0 items=0 ppid=1 pid=3969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:05.751872 kernel: audit: type=1300 audit(1742236985.736:349): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb1074f50 a2=3 a3=0 items=0 ppid=1 pid=3969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:05.751935 kernel: audit: type=1327 audit(1742236985.736:349): proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:05.736000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:05.745000 audit[3969]: USER_START pid=3969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Mar 17 18:43:05.757324 kernel: audit: type=1105 audit(1742236985.745:350): pid=3969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.757380 kernel: audit: type=1103 audit(1742236985.746:351): pid=3972 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.746000 audit[3972]: CRED_ACQ pid=3972 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.860096 sshd[3969]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:05.859000 audit[3969]: USER_END pid=3969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.859000 audit[3969]: CRED_DISP pid=3969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.862528 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:57064.service: Deactivated successfully. Mar 17 18:43:05.863263 systemd[1]: session-16.scope: Deactivated successfully. 
Mar 17 18:43:05.868807 kernel: audit: type=1106 audit(1742236985.859:352): pid=3969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.868871 kernel: audit: type=1104 audit(1742236985.859:353): pid=3969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:05.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.81:22-10.0.0.1:57064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:05.869418 systemd-logind[1293]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:43:05.870282 systemd-logind[1293]: Removed session 16. Mar 17 18:43:09.654648 kubelet[2216]: I0317 18:43:09.654588 2216 scope.go:117] "RemoveContainer" containerID="f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4" Mar 17 18:43:09.655167 kubelet[2216]: E0317 18:43:09.654670 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:09.655167 kubelet[2216]: E0317 18:43:09.655037 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-lr4b4_calico-system(8fbc69a3-8524-4440-a2d9-778d5b73de1b)\"" pod="calico-system/calico-node-lr4b4" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" Mar 17 18:43:10.862833 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:57080.service. 
Mar 17 18:43:10.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.81:22-10.0.0.1:57080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:10.863883 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:43:10.863944 kernel: audit: type=1130 audit(1742236990.862:355): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.81:22-10.0.0.1:57080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:10.893000 audit[3983]: USER_ACCT pid=3983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:10.894068 sshd[3983]: Accepted publickey for core from 10.0.0.1 port 57080 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:10.896000 audit[3983]: CRED_ACQ pid=3983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:10.898058 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:10.901396 kernel: audit: type=1101 audit(1742236990.893:356): pid=3983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:10.901448 kernel: audit: type=1103 audit(1742236990.896:357): pid=3983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:10.901502 kernel: audit: type=1006 audit(1742236990.896:358): pid=3983 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Mar 17 18:43:10.901767 systemd-logind[1293]: New session 17 of user core. Mar 17 18:43:10.902909 systemd[1]: Started session-17.scope. Mar 17 18:43:10.896000 audit[3983]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce7ec1540 a2=3 a3=0 items=0 ppid=1 pid=3983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:10.907716 kernel: audit: type=1300 audit(1742236990.896:358): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce7ec1540 a2=3 a3=0 items=0 ppid=1 pid=3983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:10.907784 kernel: audit: type=1327 audit(1742236990.896:358): proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:10.896000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:10.908846 kernel: audit: type=1105 audit(1742236990.907:359): pid=3983 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:10.907000 audit[3983]: USER_START pid=3983 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:10.913015 kernel: audit: type=1103 
audit(1742236990.908:360): pid=3986 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:10.908000 audit[3986]: CRED_ACQ pid=3986 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:11.004082 sshd[3983]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:11.004000 audit[3983]: USER_END pid=3983 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:11.006409 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:57080.service: Deactivated successfully. Mar 17 18:43:11.007143 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:43:11.004000 audit[3983]: CRED_DISP pid=3983 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:11.010523 systemd-logind[1293]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:43:11.011261 systemd-logind[1293]: Removed session 17. 
Mar 17 18:43:11.012822 kernel: audit: type=1106 audit(1742236991.004:361): pid=3983 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:11.012887 kernel: audit: type=1104 audit(1742236991.004:362): pid=3983 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:11.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.81:22-10.0.0.1:57080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:12.117317 kubelet[2216]: E0317 18:43:12.117243 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:14.117413 env[1309]: time="2025-03-17T18:43:14.117361986Z" level=info msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\"" Mar 17 18:43:14.139221 env[1309]: time="2025-03-17T18:43:14.139166432Z" level=error msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" failed" error="failed to destroy network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:14.139457 kubelet[2216]: E0317 18:43:14.139411 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:14.139746 kubelet[2216]: E0317 18:43:14.139471 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb"} Mar 17 18:43:14.139746 kubelet[2216]: E0317 18:43:14.139510 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:14.139746 kubelet[2216]: E0317 18:43:14.139537 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06aa47e7-14c4-4c99-9d64-88ed0bae7c98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rwbh5" podUID="06aa47e7-14c4-4c99-9d64-88ed0bae7c98" Mar 17 18:43:15.118055 env[1309]: time="2025-03-17T18:43:15.118000300Z" level=info msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\"" Mar 17 18:43:15.140711 env[1309]: 
time="2025-03-17T18:43:15.140647746Z" level=error msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" failed" error="failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:15.140971 kubelet[2216]: E0317 18:43:15.140915 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:15.141298 kubelet[2216]: E0317 18:43:15.140972 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af"} Mar 17 18:43:15.141298 kubelet[2216]: E0317 18:43:15.141007 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"182af1a9-314d-44fe-9729-65e5c19acc5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:15.141298 kubelet[2216]: E0317 18:43:15.141031 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"182af1a9-314d-44fe-9729-65e5c19acc5a\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qvfwr" podUID="182af1a9-314d-44fe-9729-65e5c19acc5a" Mar 17 18:43:16.007028 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:36290.service. Mar 17 18:43:16.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.81:22-10.0.0.1:36290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:16.008063 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:43:16.008149 kernel: audit: type=1130 audit(1742236996.006:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.81:22-10.0.0.1:36290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:43:16.038000 audit[4048]: USER_ACCT pid=4048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.039565 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 36290 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:16.041431 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:16.040000 audit[4048]: CRED_ACQ pid=4048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.044944 systemd-logind[1293]: New session 18 of user core. Mar 17 18:43:16.045609 systemd[1]: Started session-18.scope. Mar 17 18:43:16.046793 kernel: audit: type=1101 audit(1742236996.038:365): pid=4048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.046841 kernel: audit: type=1103 audit(1742236996.040:366): pid=4048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.046882 kernel: audit: type=1006 audit(1742236996.040:367): pid=4048 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Mar 17 18:43:16.040000 audit[4048]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbf4329c0 a2=3 a3=0 items=0 ppid=1 pid=4048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:16.052916 kernel: audit: type=1300 audit(1742236996.040:367): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbf4329c0 a2=3 a3=0 items=0 ppid=1 pid=4048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:16.052957 kernel: audit: type=1327 audit(1742236996.040:367): proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:16.040000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:16.049000 audit[4048]: USER_START pid=4048 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.058334 kernel: audit: type=1105 audit(1742236996.049:368): pid=4048 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.058376 kernel: audit: type=1103 audit(1742236996.050:369): pid=4051 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.050000 audit[4051]: CRED_ACQ pid=4051 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.117843 env[1309]: time="2025-03-17T18:43:16.117802214Z" level=info msg="StopPodSandbox for 
\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" Mar 17 18:43:16.144396 env[1309]: time="2025-03-17T18:43:16.144317238Z" level=error msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" failed" error="failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:16.144845 kubelet[2216]: E0317 18:43:16.144574 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:16.144845 kubelet[2216]: E0317 18:43:16.144637 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9"} Mar 17 18:43:16.144845 kubelet[2216]: E0317 18:43:16.144682 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:16.144845 kubelet[2216]: E0317 18:43:16.144710 2216 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"7b65dbfa-20f5-4d4f-8c03-972a550ae421\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75cb744445-d9hht" podUID="7b65dbfa-20f5-4d4f-8c03-972a550ae421" Mar 17 18:43:16.152604 sshd[4048]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:16.152000 audit[4048]: USER_END pid=4048 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.154803 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:36290.service: Deactivated successfully. Mar 17 18:43:16.155664 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:43:16.152000 audit[4048]: CRED_DISP pid=4048 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.160560 systemd-logind[1293]: Session 18 logged out. Waiting for processes to exit. 
Mar 17 18:43:16.161123 kernel: audit: type=1106 audit(1742236996.152:370): pid=4048 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.161180 kernel: audit: type=1104 audit(1742236996.152:371): pid=4048 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:16.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.81:22-10.0.0.1:36290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:16.161351 systemd-logind[1293]: Removed session 18. Mar 17 18:43:16.962376 env[1309]: time="2025-03-17T18:43:16.962330656Z" level=info msg="StopPodSandbox for \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\"" Mar 17 18:43:16.962722 env[1309]: time="2025-03-17T18:43:16.962636841Z" level=info msg="Container to stop \"c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:16.962888 env[1309]: time="2025-03-17T18:43:16.962839227Z" level=info msg="Container to stop \"f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:16.963022 env[1309]: time="2025-03-17T18:43:16.962994884Z" level=info msg="Container to stop \"450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:43:16.969155 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223-shm.mount: Deactivated successfully. Mar 17 18:43:17.011298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223-rootfs.mount: Deactivated successfully. Mar 17 18:43:17.023068 env[1309]: time="2025-03-17T18:43:17.023013031Z" level=info msg="shim disconnected" id=546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223 Mar 17 18:43:17.023068 env[1309]: time="2025-03-17T18:43:17.023068648Z" level=warning msg="cleaning up after shim disconnected" id=546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223 namespace=k8s.io Mar 17 18:43:17.023068 env[1309]: time="2025-03-17T18:43:17.023079698Z" level=info msg="cleaning up dead shim" Mar 17 18:43:17.030837 env[1309]: time="2025-03-17T18:43:17.030779691Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4105 runtime=io.containerd.runc.v2\n" Mar 17 18:43:17.031161 env[1309]: time="2025-03-17T18:43:17.031126674Z" level=info msg="TearDown network for sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" successfully" Mar 17 18:43:17.031161 env[1309]: time="2025-03-17T18:43:17.031154337Z" level=info msg="StopPodSandbox for \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" returns successfully" Mar 17 18:43:17.063887 kubelet[2216]: I0317 18:43:17.063825 2216 topology_manager.go:215] "Topology Admit Handler" podUID="dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8" podNamespace="calico-system" podName="calico-node-m8c5b" Mar 17 18:43:17.064074 kubelet[2216]: E0317 18:43:17.063912 2216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" containerName="calico-node" Mar 17 18:43:17.064074 kubelet[2216]: E0317 18:43:17.063923 2216 cpu_manager.go:395] "RemoveStaleState: 
removing container" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" containerName="flexvol-driver" Mar 17 18:43:17.064074 kubelet[2216]: E0317 18:43:17.063930 2216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" containerName="install-cni" Mar 17 18:43:17.064074 kubelet[2216]: E0317 18:43:17.063936 2216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" containerName="calico-node" Mar 17 18:43:17.064074 kubelet[2216]: E0317 18:43:17.063943 2216 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" containerName="calico-node" Mar 17 18:43:17.064074 kubelet[2216]: I0317 18:43:17.063971 2216 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" containerName="calico-node" Mar 17 18:43:17.064074 kubelet[2216]: I0317 18:43:17.063978 2216 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" containerName="calico-node" Mar 17 18:43:17.064074 kubelet[2216]: I0317 18:43:17.064018 2216 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" containerName="calico-node" Mar 17 18:43:17.127992 kubelet[2216]: I0317 18:43:17.127958 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-lib-modules\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.127992 kubelet[2216]: I0317 18:43:17.127986 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-log-dir\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.127992 
kubelet[2216]: I0317 18:43:17.127998 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-bin-dir\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128185 kubelet[2216]: I0317 18:43:17.128013 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-net-dir\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128185 kubelet[2216]: I0317 18:43:17.128031 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbc69a3-8524-4440-a2d9-778d5b73de1b-tigera-ca-bundle\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128185 kubelet[2216]: I0317 18:43:17.128017 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.128185 kubelet[2216]: I0317 18:43:17.128043 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-var-run-calico\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128185 kubelet[2216]: I0317 18:43:17.128069 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.128323 kubelet[2216]: I0317 18:43:17.128092 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.128323 kubelet[2216]: I0317 18:43:17.128104 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.128323 kubelet[2216]: I0317 18:43:17.128107 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-var-lib-calico\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128323 kubelet[2216]: I0317 18:43:17.128116 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.128323 kubelet[2216]: I0317 18:43:17.128136 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mssg2\" (UniqueName: \"kubernetes.io/projected/8fbc69a3-8524-4440-a2d9-778d5b73de1b-kube-api-access-mssg2\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128439 kubelet[2216]: I0317 18:43:17.128151 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-flexvol-driver-host\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128439 kubelet[2216]: I0317 18:43:17.128164 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-xtables-lock\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128439 kubelet[2216]: I0317 18:43:17.128206 2216 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8fbc69a3-8524-4440-a2d9-778d5b73de1b-node-certs\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128439 kubelet[2216]: I0317 18:43:17.128221 2216 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-policysync\") pod \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\" (UID: \"8fbc69a3-8524-4440-a2d9-778d5b73de1b\") " Mar 17 18:43:17.128439 kubelet[2216]: I0317 18:43:17.128283 2216 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.128439 kubelet[2216]: I0317 18:43:17.128291 2216 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-var-run-calico\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.128439 kubelet[2216]: I0317 18:43:17.128298 2216 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.128593 kubelet[2216]: I0317 18:43:17.128306 2216 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.128593 kubelet[2216]: I0317 18:43:17.128322 2216 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.128593 kubelet[2216]: I0317 
18:43:17.128343 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-policysync" (OuterVolumeSpecName: "policysync") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.128593 kubelet[2216]: I0317 18:43:17.128361 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.128685 kubelet[2216]: I0317 18:43:17.128607 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.128685 kubelet[2216]: I0317 18:43:17.128634 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:43:17.132471 systemd[1]: var-lib-kubelet-pods-8fbc69a3\x2d8524\x2d4440\x2da2d9\x2d778d5b73de1b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmssg2.mount: Deactivated successfully. 
Mar 17 18:43:17.134201 kubelet[2216]: I0317 18:43:17.134175 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbc69a3-8524-4440-a2d9-778d5b73de1b-kube-api-access-mssg2" (OuterVolumeSpecName: "kube-api-access-mssg2") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "kube-api-access-mssg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:43:17.134471 kubelet[2216]: I0317 18:43:17.134447 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fbc69a3-8524-4440-a2d9-778d5b73de1b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:43:17.134546 kubelet[2216]: I0317 18:43:17.134531 2216 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbc69a3-8524-4440-a2d9-778d5b73de1b-node-certs" (OuterVolumeSpecName: "node-certs") pod "8fbc69a3-8524-4440-a2d9-778d5b73de1b" (UID: "8fbc69a3-8524-4440-a2d9-778d5b73de1b"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:43:17.136201 systemd[1]: var-lib-kubelet-pods-8fbc69a3\x2d8524\x2d4440\x2da2d9\x2d778d5b73de1b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Mar 17 18:43:17.136327 systemd[1]: var-lib-kubelet-pods-8fbc69a3\x2d8524\x2d4440\x2da2d9\x2d778d5b73de1b-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Mar 17 18:43:17.229478 kubelet[2216]: I0317 18:43:17.228821 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-node-certs\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229478 kubelet[2216]: I0317 18:43:17.228885 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-var-run-calico\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229478 kubelet[2216]: I0317 18:43:17.228903 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-flexvol-driver-host\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229478 kubelet[2216]: I0317 18:43:17.228920 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-policysync\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229478 kubelet[2216]: I0317 18:43:17.228934 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-cni-bin-dir\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229969 kubelet[2216]: I0317 18:43:17.228948 2216 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-cni-log-dir\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229969 kubelet[2216]: I0317 18:43:17.229034 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-var-lib-calico\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229969 kubelet[2216]: I0317 18:43:17.229095 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-cni-net-dir\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229969 kubelet[2216]: I0317 18:43:17.229114 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-xtables-lock\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.229969 kubelet[2216]: I0317 18:43:17.229129 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-lib-modules\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.230101 kubelet[2216]: I0317 18:43:17.229144 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-tigera-ca-bundle\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.230101 kubelet[2216]: I0317 18:43:17.229157 2216 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4x4\" (UniqueName: \"kubernetes.io/projected/dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8-kube-api-access-sz4x4\") pod \"calico-node-m8c5b\" (UID: \"dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8\") " pod="calico-system/calico-node-m8c5b" Mar 17 18:43:17.230101 kubelet[2216]: I0317 18:43:17.229187 2216 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fbc69a3-8524-4440-a2d9-778d5b73de1b-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.230101 kubelet[2216]: I0317 18:43:17.229198 2216 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.230101 kubelet[2216]: I0317 18:43:17.229205 2216 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mssg2\" (UniqueName: \"kubernetes.io/projected/8fbc69a3-8524-4440-a2d9-778d5b73de1b-kube-api-access-mssg2\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.230101 kubelet[2216]: I0317 18:43:17.229215 2216 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.230101 kubelet[2216]: I0317 18:43:17.229223 2216 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-xtables-lock\") on node 
\"localhost\" DevicePath \"\"" Mar 17 18:43:17.230255 kubelet[2216]: I0317 18:43:17.229230 2216 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8fbc69a3-8524-4440-a2d9-778d5b73de1b-node-certs\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.230255 kubelet[2216]: I0317 18:43:17.229237 2216 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8fbc69a3-8524-4440-a2d9-778d5b73de1b-policysync\") on node \"localhost\" DevicePath \"\"" Mar 17 18:43:17.366644 kubelet[2216]: E0317 18:43:17.366613 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:17.367073 env[1309]: time="2025-03-17T18:43:17.367040920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8c5b,Uid:dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8,Namespace:calico-system,Attempt:0,}" Mar 17 18:43:17.379991 env[1309]: time="2025-03-17T18:43:17.379916184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:17.379991 env[1309]: time="2025-03-17T18:43:17.379958746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:17.379991 env[1309]: time="2025-03-17T18:43:17.379972101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:17.380213 env[1309]: time="2025-03-17T18:43:17.380149289Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7197aa5ecbd63728ee6635c055143ccae64891b1c5e584ea1a829167d37fa4d7 pid=4129 runtime=io.containerd.runc.v2 Mar 17 18:43:17.407987 env[1309]: time="2025-03-17T18:43:17.407947119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8c5b,Uid:dcaa6d51-4bb2-4f0b-bc18-fa719f042bd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"7197aa5ecbd63728ee6635c055143ccae64891b1c5e584ea1a829167d37fa4d7\"" Mar 17 18:43:17.408756 kubelet[2216]: E0317 18:43:17.408723 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:17.410598 env[1309]: time="2025-03-17T18:43:17.410564828Z" level=info msg="CreateContainer within sandbox \"7197aa5ecbd63728ee6635c055143ccae64891b1c5e584ea1a829167d37fa4d7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 18:43:17.426454 env[1309]: time="2025-03-17T18:43:17.426388542Z" level=info msg="CreateContainer within sandbox \"7197aa5ecbd63728ee6635c055143ccae64891b1c5e584ea1a829167d37fa4d7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89cf123721ce953f598672e03165fced77b1a539f64fdf8a2afaba4381db40a6\"" Mar 17 18:43:17.427073 env[1309]: time="2025-03-17T18:43:17.427048402Z" level=info msg="StartContainer for \"89cf123721ce953f598672e03165fced77b1a539f64fdf8a2afaba4381db40a6\"" Mar 17 18:43:17.475705 env[1309]: time="2025-03-17T18:43:17.475651728Z" level=info msg="StartContainer for \"89cf123721ce953f598672e03165fced77b1a539f64fdf8a2afaba4381db40a6\" returns successfully" Mar 17 18:43:17.515112 env[1309]: time="2025-03-17T18:43:17.514984493Z" level=info msg="shim disconnected" 
id=89cf123721ce953f598672e03165fced77b1a539f64fdf8a2afaba4381db40a6 Mar 17 18:43:17.515112 env[1309]: time="2025-03-17T18:43:17.515029760Z" level=warning msg="cleaning up after shim disconnected" id=89cf123721ce953f598672e03165fced77b1a539f64fdf8a2afaba4381db40a6 namespace=k8s.io Mar 17 18:43:17.515112 env[1309]: time="2025-03-17T18:43:17.515039428Z" level=info msg="cleaning up dead shim" Mar 17 18:43:17.521015 env[1309]: time="2025-03-17T18:43:17.520964322Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4209 runtime=io.containerd.runc.v2\n" Mar 17 18:43:17.889575 kubelet[2216]: I0317 18:43:17.889269 2216 scope.go:117] "RemoveContainer" containerID="f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4" Mar 17 18:43:17.890561 env[1309]: time="2025-03-17T18:43:17.890533610Z" level=info msg="RemoveContainer for \"f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4\"" Mar 17 18:43:17.892672 kubelet[2216]: E0317 18:43:17.892641 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:17.894243 env[1309]: time="2025-03-17T18:43:17.894209920Z" level=info msg="RemoveContainer for \"f2713168ed41f83a4cf3fd555ca409234671d672a172880f58247770d80c4be4\" returns successfully" Mar 17 18:43:17.894527 kubelet[2216]: I0317 18:43:17.894507 2216 scope.go:117] "RemoveContainer" containerID="450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a" Mar 17 18:43:17.895696 env[1309]: time="2025-03-17T18:43:17.895238012Z" level=info msg="CreateContainer within sandbox \"7197aa5ecbd63728ee6635c055143ccae64891b1c5e584ea1a829167d37fa4d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 18:43:17.896045 env[1309]: time="2025-03-17T18:43:17.896008474Z" level=info msg="RemoveContainer for 
\"450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a\"" Mar 17 18:43:17.899510 env[1309]: time="2025-03-17T18:43:17.899474693Z" level=info msg="RemoveContainer for \"450947b9bc2e2e827996f8d04d360bd8c560b64df91796f188454627c5fbd52a\" returns successfully" Mar 17 18:43:17.899688 kubelet[2216]: I0317 18:43:17.899606 2216 scope.go:117] "RemoveContainer" containerID="c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0" Mar 17 18:43:17.900933 env[1309]: time="2025-03-17T18:43:17.900906366Z" level=info msg="RemoveContainer for \"c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0\"" Mar 17 18:43:17.908075 env[1309]: time="2025-03-17T18:43:17.908022234Z" level=info msg="RemoveContainer for \"c5463d9ab9e623397d367c99e8df2d80c48f068c9228c838f5b80b0c703661b0\" returns successfully" Mar 17 18:43:17.915172 env[1309]: time="2025-03-17T18:43:17.915107572Z" level=info msg="CreateContainer within sandbox \"7197aa5ecbd63728ee6635c055143ccae64891b1c5e584ea1a829167d37fa4d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c958c2a51b71743200ec708f6f19ac9c4925c356ed64200cb45820406164df1d\"" Mar 17 18:43:17.915735 env[1309]: time="2025-03-17T18:43:17.915710734Z" level=info msg="StartContainer for \"c958c2a51b71743200ec708f6f19ac9c4925c356ed64200cb45820406164df1d\"" Mar 17 18:43:17.961658 env[1309]: time="2025-03-17T18:43:17.961588156Z" level=info msg="StartContainer for \"c958c2a51b71743200ec708f6f19ac9c4925c356ed64200cb45820406164df1d\" returns successfully" Mar 17 18:43:18.117178 env[1309]: time="2025-03-17T18:43:18.117134274Z" level=info msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" Mar 17 18:43:18.140163 env[1309]: time="2025-03-17T18:43:18.140015424Z" level=error msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" failed" error="failed to destroy network for sandbox 
\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:43:18.140535 kubelet[2216]: E0317 18:43:18.140482 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:18.140620 kubelet[2216]: E0317 18:43:18.140539 2216 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035"} Mar 17 18:43:18.140620 kubelet[2216]: E0317 18:43:18.140580 2216 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:43:18.140620 kubelet[2216]: E0317 18:43:18.140602 2216 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k6vcp" podUID="5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04" Mar 17 18:43:18.340463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c958c2a51b71743200ec708f6f19ac9c4925c356ed64200cb45820406164df1d-rootfs.mount: Deactivated successfully. Mar 17 18:43:18.347562 env[1309]: time="2025-03-17T18:43:18.347505968Z" level=info msg="shim disconnected" id=c958c2a51b71743200ec708f6f19ac9c4925c356ed64200cb45820406164df1d Mar 17 18:43:18.347562 env[1309]: time="2025-03-17T18:43:18.347550032Z" level=warning msg="cleaning up after shim disconnected" id=c958c2a51b71743200ec708f6f19ac9c4925c356ed64200cb45820406164df1d namespace=k8s.io Mar 17 18:43:18.347562 env[1309]: time="2025-03-17T18:43:18.347558998Z" level=info msg="cleaning up dead shim" Mar 17 18:43:18.353176 env[1309]: time="2025-03-17T18:43:18.353139880Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4305 runtime=io.containerd.runc.v2\n" Mar 17 18:43:18.895308 kubelet[2216]: E0317 18:43:18.895283 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:18.907474 env[1309]: time="2025-03-17T18:43:18.907420513Z" level=info msg="CreateContainer within sandbox \"7197aa5ecbd63728ee6635c055143ccae64891b1c5e584ea1a829167d37fa4d7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 18:43:18.919246 env[1309]: time="2025-03-17T18:43:18.919193222Z" level=info msg="CreateContainer within sandbox \"7197aa5ecbd63728ee6635c055143ccae64891b1c5e584ea1a829167d37fa4d7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8911b6e55c7c1bb359818b52b8e613dd7adf4c2eee39822ad0507a7e186f39c9\"" Mar 17 18:43:18.919648 
env[1309]: time="2025-03-17T18:43:18.919623713Z" level=info msg="StartContainer for \"8911b6e55c7c1bb359818b52b8e613dd7adf4c2eee39822ad0507a7e186f39c9\"" Mar 17 18:43:18.960498 env[1309]: time="2025-03-17T18:43:18.960443983Z" level=info msg="StartContainer for \"8911b6e55c7c1bb359818b52b8e613dd7adf4c2eee39822ad0507a7e186f39c9\" returns successfully" Mar 17 18:43:19.118096 env[1309]: time="2025-03-17T18:43:19.118051591Z" level=info msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\"" Mar 17 18:43:19.119129 kubelet[2216]: I0317 18:43:19.119097 2216 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fbc69a3-8524-4440-a2d9-778d5b73de1b" path="/var/lib/kubelet/pods/8fbc69a3-8524-4440-a2d9-778d5b73de1b/volumes" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.171 [INFO][4382] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.171 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" iface="eth0" netns="/var/run/netns/cni-a0c4ca01-b666-5bc8-c0df-3ac5e29a0e18" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.171 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" iface="eth0" netns="/var/run/netns/cni-a0c4ca01-b666-5bc8-c0df-3ac5e29a0e18" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.172 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" iface="eth0" netns="/var/run/netns/cni-a0c4ca01-b666-5bc8-c0df-3ac5e29a0e18" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.172 [INFO][4382] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.172 [INFO][4382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.190 [INFO][4397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.191 [INFO][4397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.191 [INFO][4397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.195 [WARNING][4397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.195 [INFO][4397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.196 [INFO][4397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:19.199547 env[1309]: 2025-03-17 18:43:19.197 [INFO][4382] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:19.199547 env[1309]: time="2025-03-17T18:43:19.199470837Z" level=info msg="TearDown network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" successfully" Mar 17 18:43:19.199956 env[1309]: time="2025-03-17T18:43:19.199551632Z" level=info msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" returns successfully" Mar 17 18:43:19.200383 env[1309]: time="2025-03-17T18:43:19.200348221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bd99c456-c2kg7,Uid:fe03e3e2-d761-40da-81d3-dd75baa1eeea,Namespace:calico-apiserver,Attempt:1,}" Mar 17 18:43:19.202262 systemd[1]: run-netns-cni\x2da0c4ca01\x2db666\x2d5bc8\x2dc0df\x2d3ac5e29a0e18.mount: Deactivated successfully. 
Mar 17 18:43:19.297902 systemd-networkd[1085]: cali012642b78b8: Link UP Mar 17 18:43:19.300288 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:43:19.300438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali012642b78b8: link becomes ready Mar 17 18:43:19.300404 systemd-networkd[1085]: cali012642b78b8: Gained carrier Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.226 [INFO][4408] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.235 [INFO][4408] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0 calico-apiserver-9bd99c456- calico-apiserver fe03e3e2-d761-40da-81d3-dd75baa1eeea 1051 0 2025-03-17 18:42:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9bd99c456 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9bd99c456-c2kg7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali012642b78b8 [] []}} ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-c2kg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.235 [INFO][4408] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-c2kg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.258 [INFO][4423] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" HandleID="k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.265 [INFO][4423] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" HandleID="k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd3c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9bd99c456-c2kg7", "timestamp":"2025-03-17 18:43:19.258501923 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.265 [INFO][4423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.265 [INFO][4423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.265 [INFO][4423] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.266 [INFO][4423] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" host="localhost" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.270 [INFO][4423] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.273 [INFO][4423] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.274 [INFO][4423] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.276 [INFO][4423] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.276 [INFO][4423] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" host="localhost" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.277 [INFO][4423] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.281 [INFO][4423] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" host="localhost" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.287 [INFO][4423] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" host="localhost" Mar 17 
18:43:19.310929 env[1309]: 2025-03-17 18:43:19.287 [INFO][4423] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" host="localhost" Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.287 [INFO][4423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:19.310929 env[1309]: 2025-03-17 18:43:19.287 [INFO][4423] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" HandleID="k8s-pod-network.21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.311524 env[1309]: 2025-03-17 18:43:19.289 [INFO][4408] cni-plugin/k8s.go 386: Populated endpoint ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-c2kg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0", GenerateName:"calico-apiserver-9bd99c456-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe03e3e2-d761-40da-81d3-dd75baa1eeea", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bd99c456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9bd99c456-c2kg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali012642b78b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:19.311524 env[1309]: 2025-03-17 18:43:19.289 [INFO][4408] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-c2kg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.311524 env[1309]: 2025-03-17 18:43:19.289 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali012642b78b8 ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-c2kg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.311524 env[1309]: 2025-03-17 18:43:19.300 [INFO][4408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-c2kg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.311524 env[1309]: 2025-03-17 18:43:19.301 [INFO][4408] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-c2kg7" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0", GenerateName:"calico-apiserver-9bd99c456-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe03e3e2-d761-40da-81d3-dd75baa1eeea", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bd99c456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d", Pod:"calico-apiserver-9bd99c456-c2kg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali012642b78b8", MAC:"92:df:5c:2c:86:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:19.311524 env[1309]: 2025-03-17 18:43:19.308 [INFO][4408] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-c2kg7" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:19.321282 env[1309]: 
time="2025-03-17T18:43:19.321210445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:19.321282 env[1309]: time="2025-03-17T18:43:19.321244169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:19.321282 env[1309]: time="2025-03-17T18:43:19.321253536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:19.321428 env[1309]: time="2025-03-17T18:43:19.321372854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d pid=4453 runtime=io.containerd.runc.v2 Mar 17 18:43:19.343604 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:43:19.365902 env[1309]: time="2025-03-17T18:43:19.365850436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bd99c456-c2kg7,Uid:fe03e3e2-d761-40da-81d3-dd75baa1eeea,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d\"" Mar 17 18:43:19.367594 env[1309]: time="2025-03-17T18:43:19.367548706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Mar 17 18:43:19.901477 kubelet[2216]: E0317 18:43:19.901448 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:19.912136 kubelet[2216]: I0317 18:43:19.912075 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m8c5b" podStartSLOduration=2.912054891 podStartE2EDuration="2.912054891s" podCreationTimestamp="2025-03-17 18:43:17 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:19.91128924 +0000 UTC m=+84.874858286" watchObservedRunningTime="2025-03-17 18:43:19.912054891 +0000 UTC m=+84.875623937" Mar 17 18:43:20.117551 env[1309]: time="2025-03-17T18:43:20.117503847Z" level=info msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\"" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.153 [INFO][4526] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.153 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" iface="eth0" netns="/var/run/netns/cni-720479e3-1ab7-77fc-13e3-d615b52be019" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.154 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" iface="eth0" netns="/var/run/netns/cni-720479e3-1ab7-77fc-13e3-d615b52be019" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.154 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" iface="eth0" netns="/var/run/netns/cni-720479e3-1ab7-77fc-13e3-d615b52be019" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.154 [INFO][4526] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.154 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.172 [INFO][4533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.172 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.172 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.177 [WARNING][4533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.177 [INFO][4533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.180 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:20.190176 env[1309]: 2025-03-17 18:43:20.183 [INFO][4526] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:43:20.194000 audit[4574]: AVC avc: denied { write } for pid=4574 comm="tee" name="fd" dev="proc" ino=26286 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:43:20.194000 audit[4574]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdb7f21a2f a2=241 a3=1b6 items=1 ppid=4549 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.194000 audit: CWD cwd="/etc/service/enabled/bird/log" Mar 17 18:43:20.194000 audit: PATH item=0 name="/dev/fd/63" inode=26281 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:43:20.194000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 
18:43:20.199606 systemd[1]: run-netns-cni\x2d720479e3\x2d1ab7\x2d77fc\x2d13e3\x2dd615b52be019.mount: Deactivated successfully. Mar 17 18:43:20.200071 env[1309]: time="2025-03-17T18:43:20.199949493Z" level=info msg="TearDown network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" successfully" Mar 17 18:43:20.200071 env[1309]: time="2025-03-17T18:43:20.200004148Z" level=info msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" returns successfully" Mar 17 18:43:20.200693 env[1309]: time="2025-03-17T18:43:20.200661341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bd99c456-2chqm,Uid:e63620b8-dbf8-4096-bfe9-5ecbfa441daf,Namespace:calico-apiserver,Attempt:1,}" Mar 17 18:43:20.201000 audit[4596]: AVC avc: denied { write } for pid=4596 comm="tee" name="fd" dev="proc" ino=28748 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:43:20.201000 audit[4596]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffcb42da30 a2=241 a3=1b6 items=1 ppid=4553 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.201000 audit: CWD cwd="/etc/service/enabled/cni/log" Mar 17 18:43:20.201000 audit: PATH item=0 name="/dev/fd/63" inode=27263 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:43:20.201000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:43:20.203000 audit[4601]: AVC avc: denied { write } for pid=4601 comm="tee" name="fd" dev="proc" ino=28128 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:43:20.203000 audit[4601]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0b793a1e a2=241 a3=1b6 items=1 ppid=4556 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.203000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Mar 17 18:43:20.203000 audit: PATH item=0 name="/dev/fd/63" inode=26295 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:43:20.203000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:43:20.215000 audit[4618]: AVC avc: denied { write } for pid=4618 comm="tee" name="fd" dev="proc" ino=27270 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:43:20.215000 audit[4618]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe921daa2e a2=241 a3=1b6 items=1 ppid=4554 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.215000 audit: CWD cwd="/etc/service/enabled/confd/log" Mar 17 18:43:20.215000 audit: PATH item=0 name="/dev/fd/63" inode=28755 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:43:20.215000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:43:20.225000 audit[4604]: AVC avc: denied { 
write } for pid=4604 comm="tee" name="fd" dev="proc" ino=28758 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:43:20.225000 audit[4604]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff6931fa2e a2=241 a3=1b6 items=1 ppid=4548 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.225000 audit: CWD cwd="/etc/service/enabled/bird6/log" Mar 17 18:43:20.225000 audit: PATH item=0 name="/dev/fd/63" inode=26296 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:43:20.225000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:43:20.236000 audit[4613]: AVC avc: denied { write } for pid=4613 comm="tee" name="fd" dev="proc" ino=27279 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:43:20.236000 audit[4613]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffd0686a1f a2=241 a3=1b6 items=1 ppid=4550 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.236000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Mar 17 18:43:20.236000 audit: PATH item=0 name="/dev/fd/63" inode=28754 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:43:20.236000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:43:20.307000 audit[4642]: AVC avc: denied { write } for pid=4642 comm="tee" name="fd" dev="proc" ino=28158 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Mar 17 18:43:20.307000 audit[4642]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe43aeca2e a2=241 a3=1b6 items=1 ppid=4566 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.307000 audit: CWD cwd="/etc/service/enabled/felix/log" Mar 17 18:43:20.307000 audit: PATH item=0 name="/dev/fd/63" inode=27284 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:43:20.307000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.330000 audit: BPF prog-id=10 op=LOAD Mar 17 18:43:20.330000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe38ab0430 a2=98 a3=3 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.331000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit: BPF prog-id=11 op=LOAD Mar 17 18:43:20.331000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe38ab0210 a2=74 a3=540051 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Mar 17 18:43:20.331000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.331000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.331000 audit: BPF prog-id=12 op=LOAD Mar 17 18:43:20.331000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe38ab0240 a2=94 a3=2 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.331000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.331000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { perfmon } 
for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit: BPF prog-id=13 op=LOAD Mar 17 18:43:20.439000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe38ab0100 a2=40 a3=1 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.439000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.439000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:43:20.439000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.439000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe38ab01d0 a2=50 a3=7ffe38ab02b0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.439000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 
audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe38ab0110 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe38ab0140 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe38ab0050 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 
a1=7ffe38ab0160 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe38ab0140 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe38ab0130 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe38ab0160 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe38ab0140 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe38ab0160 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe38ab0130 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe38ab01a0 a2=28 a3=0 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe38aaff50 a2=50 a3=1 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.446000 audit: BPF prog-id=14 op=LOAD Mar 17 18:43:20.446000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe38aaff50 a2=94 a3=5 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.447000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } 
for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe38ab0000 a2=50 a3=1 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.447000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe38ab0120 a2=4 a3=38 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.447000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { confidentiality } for pid=4670 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:43:20.447000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe38ab0170 a2=94 a3=6 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.447000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 
18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { confidentiality } for pid=4670 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:43:20.447000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe38aaf920 a2=94 a3=83 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.447000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC 
avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { perfmon } for pid=4670 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { bpf } for pid=4670 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:20.447000 audit[4670]: AVC avc: denied { confidentiality } for pid=4670 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:43:20.447000 audit[4670]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe38aaf920 a2=94 a3=83 items=0 ppid=4571 pid=4670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:20.447000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:43:20.611002 systemd-networkd[1085]: cali012642b78b8: Gained IPv6LL Mar 17 18:43:20.904538 kubelet[2216]: E0317 18:43:20.904499 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:20.968322 systemd[1]: run-containerd-runc-k8s.io-8911b6e55c7c1bb359818b52b8e613dd7adf4c2eee39822ad0507a7e186f39c9-runc.z6iUev.mount: Deactivated successfully. Mar 17 18:43:21.123197 systemd-networkd[1085]: cali777a0388350: Link UP Mar 17 18:43:21.125731 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:43:21.125806 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali777a0388350: link becomes ready Mar 17 18:43:21.125914 systemd-networkd[1085]: cali777a0388350: Gained carrier Mar 17 18:43:21.135909 kernel: kauditd_printk_skb: 185 callbacks suppressed Mar 17 18:43:21.136052 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.139792 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.148308 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Mar 17 18:43:21.148385 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.166163 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.170818 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.168960 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:36298.service. 
Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.177640 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.177703 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.180943 kernel: audit: type=1400 audit(1742237001.130:408): avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.130000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.182097 kernel: audit: type=1334 audit(1742237001.130:408): prog-id=15 op=LOAD Mar 17 18:43:21.130000 audit: BPF prog-id=15 op=LOAD Mar 17 18:43:21.130000 audit[4713]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbe029f60 a2=98 a3=1999999999999999 items=0 ppid=4571 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.130000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:43:21.135000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit: BPF prog-id=16 op=LOAD Mar 17 18:43:21.135000 audit[4713]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbe029e40 a2=74 a3=ffff items=0 ppid=4571 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:43:21.135000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { perfmon } for pid=4713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit[4713]: AVC avc: denied { bpf } for pid=4713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.135000 audit: BPF prog-id=17 op=LOAD Mar 17 18:43:21.135000 audit[4713]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcbe029e80 a2=40 a3=7ffcbe02a060 items=0 ppid=4571 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.135000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:43:21.135000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:43:21.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.81:22-10.0.0.1:36298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.265 [INFO][4622] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.273 [INFO][4622] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0 calico-apiserver-9bd99c456- calico-apiserver e63620b8-dbf8-4096-bfe9-5ecbfa441daf 1074 0 2025-03-17 18:42:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9bd99c456 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9bd99c456-2chqm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali777a0388350 [] []}} ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-2chqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--2chqm-" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.273 [INFO][4622] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-2chqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.301 [INFO][4647] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" HandleID="k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.309 [INFO][4647] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" HandleID="k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9bd99c456-2chqm", "timestamp":"2025-03-17 18:43:20.301770598 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.309 [INFO][4647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.309 [INFO][4647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.309 [INFO][4647] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.310 [INFO][4647] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" host="localhost" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.313 [INFO][4647] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.319 [INFO][4647] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.320 [INFO][4647] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.324 [INFO][4647] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:21.226538 
env[1309]: 2025-03-17 18:43:20.324 [INFO][4647] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" host="localhost" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.325 [INFO][4647] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9 Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.367 [INFO][4647] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" host="localhost" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.383 [INFO][4647] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" host="localhost" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.383 [INFO][4647] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" host="localhost" Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.383 [INFO][4647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:43:21.226538 env[1309]: 2025-03-17 18:43:20.383 [INFO][4647] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" HandleID="k8s-pod-network.aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:21.227395 env[1309]: 2025-03-17 18:43:20.385 [INFO][4622] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-2chqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0", GenerateName:"calico-apiserver-9bd99c456-", Namespace:"calico-apiserver", SelfLink:"", UID:"e63620b8-dbf8-4096-bfe9-5ecbfa441daf", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bd99c456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9bd99c456-2chqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777a0388350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:21.227395 env[1309]: 2025-03-17 18:43:20.385 [INFO][4622] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-2chqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:21.227395 env[1309]: 2025-03-17 18:43:20.385 [INFO][4622] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali777a0388350 ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-2chqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:21.227395 env[1309]: 2025-03-17 18:43:21.126 [INFO][4622] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-2chqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:21.227395 env[1309]: 2025-03-17 18:43:21.126 [INFO][4622] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-2chqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0", GenerateName:"calico-apiserver-9bd99c456-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"e63620b8-dbf8-4096-bfe9-5ecbfa441daf", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bd99c456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9", Pod:"calico-apiserver-9bd99c456-2chqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777a0388350", MAC:"26:d4:9f:a7:43:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:21.227395 env[1309]: 2025-03-17 18:43:21.224 [INFO][4622] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9" Namespace="calico-apiserver" Pod="calico-apiserver-9bd99c456-2chqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:21.305427 systemd-networkd[1085]: vxlan.calico: Link UP Mar 17 18:43:21.305433 systemd-networkd[1085]: vxlan.calico: Gained carrier Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: 
denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit: BPF prog-id=18 op=LOAD Mar 17 18:43:21.316000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe03d6f3d0 a2=98 a3=ffffffff items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.316000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.316000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit: BPF prog-id=19 op=LOAD Mar 17 18:43:21.316000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe03d6f1e0 a2=74 a3=540051 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.316000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.316000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit: BPF prog-id=20 op=LOAD Mar 17 18:43:21.316000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe03d6f210 a2=94 a3=2 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.316000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.316000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:43:21.316000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.316000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe03d6f0e0 a2=28 a3=0 
items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.316000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe03d6f110 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe03d6f020 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe03d6f130 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe03d6f110 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe03d6f100 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe03d6f130 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe03d6f110 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe03d6f130 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe03d6f100 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe03d6f170 a2=28 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 
18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit: BPF prog-id=21 op=LOAD Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe03d6efe0 a2=40 a3=0 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe03d6efd0 a2=50 a3=2800 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe03d6efd0 a2=50 a3=2800 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit: BPF prog-id=22 op=LOAD Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe03d6e7f0 a2=94 a3=2 items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.317000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { perfmon } for pid=4755 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit[4755]: AVC avc: denied { bpf } for pid=4755 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.317000 audit: BPF prog-id=23 op=LOAD Mar 17 18:43:21.317000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe03d6e8f0 a2=94 a3=2d items=0 ppid=4571 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.317000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit: BPF prog-id=24 op=LOAD Mar 17 18:43:21.319000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd89893c90 a2=98 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.319000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.319000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC 
avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit: BPF prog-id=25 op=LOAD Mar 17 18:43:21.319000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd89893a70 a2=74 a3=540051 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.319000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.319000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.319000 audit: BPF prog-id=26 op=LOAD Mar 17 18:43:21.319000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=5 a1=7ffd89893aa0 a2=94 a3=2 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.319000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.319000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:43:21.363985 env[1309]: time="2025-03-17T18:43:21.363801170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:21.363985 env[1309]: time="2025-03-17T18:43:21.363837699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:21.363985 env[1309]: time="2025-03-17T18:43:21.363849492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:21.364370 env[1309]: time="2025-03-17T18:43:21.364299400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9 pid=4774 runtime=io.containerd.runc.v2 Mar 17 18:43:21.379000 audit[4724]: USER_ACCT pid=4724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:21.380111 sshd[4724]: Accepted publickey for core from 10.0.0.1 port 36298 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:21.381000 audit[4724]: CRED_ACQ pid=4724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:21.382000 audit[4724]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd54182a10 a2=3 a3=0 items=0 ppid=1 pid=4724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.382000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:21.383738 sshd[4724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:21.400716 systemd-logind[1293]: New session 19 of user core. Mar 17 18:43:21.402377 systemd[1]: run-containerd-runc-k8s.io-aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9-runc.8YuhFv.mount: Deactivated successfully. Mar 17 18:43:21.404206 systemd[1]: Started session-19.scope. 
Mar 17 18:43:21.411000 audit[4724]: USER_START pid=4724 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:21.413000 audit[4797]: CRED_ACQ pid=4797 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:21.422507 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:43:21.469672 env[1309]: time="2025-03-17T18:43:21.469621766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bd99c456-2chqm,Uid:e63620b8-dbf8-4096-bfe9-5ecbfa441daf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9\"" Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { perfmon } for 
pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit: BPF prog-id=27 op=LOAD Mar 17 18:43:21.483000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd89893960 a2=40 a3=1 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.483000 audit: BPF prog-id=27 op=UNLOAD Mar 17 18:43:21.483000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.483000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd89893a30 
a2=50 a3=7ffd89893b10 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.483000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd89893970 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd898939a0 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd898938b0 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd898939c0 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd898939a0 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd89893990 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd898939c0 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd898939a0 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd898939c0 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.490000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.490000 audit[4759]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd89893990 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.490000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd89893a00 a2=28 a3=0 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd898937b0 a2=50 a3=1 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.491000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit: BPF prog-id=28 op=LOAD Mar 17 18:43:21.491000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd898937b0 a2=94 a3=5 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.491000 audit: BPF prog-id=28 op=UNLOAD Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd89893860 a2=50 a3=1 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd89893980 a2=4 a3=38 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { confidentiality } for pid=4759 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:43:21.491000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd898939d0 a2=94 a3=6 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: 
denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { confidentiality } for pid=4759 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:43:21.491000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd89893180 a2=94 a3=83 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.491000 audit[4759]: AVC avc: denied { perfmon } for pid=4759 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 
18:43:21.491000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd89893180 a2=94 a3=83 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.491000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.492000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.492000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd89894bc0 a2=10 a3=208 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.492000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.492000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.492000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd89894a60 a2=10 a3=3 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.492000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.492000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.492000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd89894a00 a2=10 a3=3 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.492000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.492000 audit[4759]: AVC avc: denied { bpf } for pid=4759 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:43:21.492000 audit[4759]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd89894a00 a2=10 a3=7 items=0 ppid=4571 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.492000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:43:21.500000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:43:21.545000 audit[4844]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4844 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:21.545000 audit[4844]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe42b99ef0 a2=0 a3=7ffe42b99edc items=0 ppid=4571 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.545000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:43:21.549000 audit[4843]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=4843 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:21.549000 audit[4843]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe2e7be2d0 a2=0 a3=7ffe2e7be2bc items=0 ppid=4571 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.549000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:43:21.552000 audit[4847]: NETFILTER_CFG table=filter:99 family=2 entries=75 op=nft_register_chain pid=4847 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:21.552000 audit[4847]: SYSCALL arch=c000003e syscall=46 success=yes exit=40748 a0=3 a1=7ffedf9c89b0 a2=0 a3=7ffedf9c899c items=0 ppid=4571 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.552000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 
18:43:21.554000 audit[4842]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=4842 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:21.554000 audit[4842]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff9effa9b0 a2=0 a3=7fff9effa99c items=0 ppid=4571 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.554000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:43:21.585835 sshd[4724]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:21.586000 audit[4724]: USER_END pid=4724 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:21.586000 audit[4724]: CRED_DISP pid=4724 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:21.587000 audit[4852]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4852 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:21.588377 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:36298.service: Deactivated successfully. 
Mar 17 18:43:21.587000 audit[4852]: SYSCALL arch=c000003e syscall=46 success=yes exit=20328 a0=3 a1=7ffcff54e0a0 a2=0 a3=7ffcff54e08c items=0 ppid=4571 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:21.587000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:43:21.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.81:22-10.0.0.1:36298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:21.589401 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:43:21.589879 systemd-logind[1293]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:43:21.590802 systemd-logind[1293]: Removed session 19. 
Mar 17 18:43:22.479177 systemd-networkd[1085]: cali777a0388350: Gained IPv6LL Mar 17 18:43:22.915021 systemd-networkd[1085]: vxlan.calico: Gained IPv6LL Mar 17 18:43:23.117514 kubelet[2216]: E0317 18:43:23.117482 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:23.136279 env[1309]: time="2025-03-17T18:43:23.136236362Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:23.138166 env[1309]: time="2025-03-17T18:43:23.138129086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:23.141373 env[1309]: time="2025-03-17T18:43:23.141329291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:23.142791 env[1309]: time="2025-03-17T18:43:23.142766939Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:23.143280 env[1309]: time="2025-03-17T18:43:23.143258866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Mar 17 18:43:23.144175 env[1309]: time="2025-03-17T18:43:23.144154463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Mar 17 18:43:23.145735 env[1309]: time="2025-03-17T18:43:23.145711618Z" level=info msg="CreateContainer within sandbox 
\"21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 18:43:23.156564 env[1309]: time="2025-03-17T18:43:23.156532528Z" level=info msg="CreateContainer within sandbox \"21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"46d560d68ea0149e9682b31e84a94b315e1c33f7ef2cf97ef47cc243d9e69ff7\"" Mar 17 18:43:23.156934 env[1309]: time="2025-03-17T18:43:23.156907672Z" level=info msg="StartContainer for \"46d560d68ea0149e9682b31e84a94b315e1c33f7ef2cf97ef47cc243d9e69ff7\"" Mar 17 18:43:23.205376 env[1309]: time="2025-03-17T18:43:23.205328063Z" level=info msg="StartContainer for \"46d560d68ea0149e9682b31e84a94b315e1c33f7ef2cf97ef47cc243d9e69ff7\" returns successfully" Mar 17 18:43:23.610629 env[1309]: time="2025-03-17T18:43:23.610513518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:23.612557 env[1309]: time="2025-03-17T18:43:23.612519599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:23.614035 env[1309]: time="2025-03-17T18:43:23.613999418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:23.615665 env[1309]: time="2025-03-17T18:43:23.615641024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:23.615974 env[1309]: 
time="2025-03-17T18:43:23.615948280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Mar 17 18:43:23.617846 env[1309]: time="2025-03-17T18:43:23.617816688Z" level=info msg="CreateContainer within sandbox \"aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 18:43:23.631013 env[1309]: time="2025-03-17T18:43:23.630968346Z" level=info msg="CreateContainer within sandbox \"aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"136bf88dc9bbb92e04fab3e00bf51efd8f187a673621d011aaac284f0b8ac0c8\"" Mar 17 18:43:23.631628 env[1309]: time="2025-03-17T18:43:23.631602174Z" level=info msg="StartContainer for \"136bf88dc9bbb92e04fab3e00bf51efd8f187a673621d011aaac284f0b8ac0c8\"" Mar 17 18:43:24.022598 env[1309]: time="2025-03-17T18:43:24.022539617Z" level=info msg="StartContainer for \"136bf88dc9bbb92e04fab3e00bf51efd8f187a673621d011aaac284f0b8ac0c8\" returns successfully" Mar 17 18:43:24.045260 kubelet[2216]: I0317 18:43:24.045200 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9bd99c456-c2kg7" podStartSLOduration=64.268240248 podStartE2EDuration="1m8.045183578s" podCreationTimestamp="2025-03-17 18:42:16 +0000 UTC" firstStartedPulling="2025-03-17 18:43:19.367059563 +0000 UTC m=+84.330628609" lastFinishedPulling="2025-03-17 18:43:23.144002893 +0000 UTC m=+88.107571939" observedRunningTime="2025-03-17 18:43:24.035880015 +0000 UTC m=+88.999449071" watchObservedRunningTime="2025-03-17 18:43:24.045183578 +0000 UTC m=+89.008752614" Mar 17 18:43:24.045489 kubelet[2216]: I0317 18:43:24.045439 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9bd99c456-2chqm" 
podStartSLOduration=65.899795133 podStartE2EDuration="1m8.045436188s" podCreationTimestamp="2025-03-17 18:42:16 +0000 UTC" firstStartedPulling="2025-03-17 18:43:21.470999222 +0000 UTC m=+86.434568268" lastFinishedPulling="2025-03-17 18:43:23.616640277 +0000 UTC m=+88.580209323" observedRunningTime="2025-03-17 18:43:24.04471236 +0000 UTC m=+89.008281396" watchObservedRunningTime="2025-03-17 18:43:24.045436188 +0000 UTC m=+89.009005235" Mar 17 18:43:24.109000 audit[4932]: NETFILTER_CFG table=filter:102 family=2 entries=16 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:24.109000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff124331b0 a2=0 a3=7fff1243319c items=0 ppid=2420 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:24.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:24.114000 audit[4932]: NETFILTER_CFG table=nat:103 family=2 entries=14 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:24.114000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff124331b0 a2=0 a3=0 items=0 ppid=2420 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:24.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:24.124000 audit[4934]: NETFILTER_CFG table=filter:104 family=2 entries=15 op=nft_register_rule pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 
18:43:24.124000 audit[4934]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc3ce4a900 a2=0 a3=7ffc3ce4a8ec items=0 ppid=2420 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:24.124000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:24.128000 audit[4934]: NETFILTER_CFG table=nat:105 family=2 entries=21 op=nft_register_chain pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:24.128000 audit[4934]: SYSCALL arch=c000003e syscall=46 success=yes exit=7044 a0=3 a1=7ffc3ce4a900 a2=0 a3=7ffc3ce4a8ec items=0 ppid=2420 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:24.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:25.137000 audit[4937]: NETFILTER_CFG table=filter:106 family=2 entries=14 op=nft_register_rule pid=4937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:25.137000 audit[4937]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd4397e2e0 a2=0 a3=7ffd4397e2cc items=0 ppid=2420 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:25.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:25.148000 audit[4937]: NETFILTER_CFG table=nat:107 family=2 entries=28 op=nft_register_chain 
pid=4937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:25.148000 audit[4937]: SYSCALL arch=c000003e syscall=46 success=yes exit=8932 a0=3 a1=7ffd4397e2e0 a2=0 a3=7ffd4397e2cc items=0 ppid=2420 pid=4937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:25.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:26.588349 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:40444.service. Mar 17 18:43:26.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.81:22-10.0.0.1:40444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:26.589550 kernel: kauditd_printk_skb: 359 callbacks suppressed Mar 17 18:43:26.589624 kernel: audit: type=1130 audit(1742237006.587:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.81:22-10.0.0.1:40444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:43:26.621000 audit[4946]: USER_ACCT pid=4946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.622519 sshd[4946]: Accepted publickey for core from 10.0.0.1 port 40444 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:26.625000 audit[4946]: CRED_ACQ pid=4946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.626457 sshd[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:26.630138 kernel: audit: type=1101 audit(1742237006.621:492): pid=4946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.630194 kernel: audit: type=1103 audit(1742237006.625:493): pid=4946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.630217 kernel: audit: type=1006 audit(1742237006.625:494): pid=4946 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Mar 17 18:43:26.629815 systemd-logind[1293]: New session 20 of user core. Mar 17 18:43:26.630522 systemd[1]: Started session-20.scope. 
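The `kauditd_printk_skb` lines above stamp each event as `audit(1742237006.587:491)`: Unix epoch seconds with millisecond precision, followed by an event serial number. A small sketch converting such a stamp back to wall-clock time (assuming, as the `-00` timezone in the kernel banner suggests, that this journal runs in UTC):

```python
from datetime import datetime, timezone

# Convert an audit stamp like "1742237006.587:491" (epoch.millis:serial)
# into a human-readable UTC timestamp.
def audit_stamp_to_utc(stamp: str) -> str:
    epoch, _serial = stamp.split(":")
    dt = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M:%S")

print(audit_stamp_to_utc("1742237006.587:491"))
# -> 2025-03-17 18:43:26
```

The result agrees with the `Mar 17 18:43:26` journal prefix on the corresponding `audit: type=1130` line, confirming the serialized kernel audit events and the journal timestamps describe the same moment.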
Mar 17 18:43:26.636361 kernel: audit: type=1300 audit(1742237006.625:494): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffafac9790 a2=3 a3=0 items=0 ppid=1 pid=4946 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:26.625000 audit[4946]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffafac9790 a2=3 a3=0 items=0 ppid=1 pid=4946 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:26.625000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:26.637963 kernel: audit: type=1327 audit(1742237006.625:494): proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:26.638029 kernel: audit: type=1105 audit(1742237006.633:495): pid=4946 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.633000 audit[4946]: USER_START pid=4946 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.634000 audit[4951]: CRED_ACQ pid=4951 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.645602 kernel: audit: type=1103 audit(1742237006.634:496): pid=4951 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.740803 sshd[4946]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:26.740000 audit[4946]: USER_END pid=4946 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.742749 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:40444.service: Deactivated successfully. Mar 17 18:43:26.743501 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:43:26.740000 audit[4946]: CRED_DISP pid=4946 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.746459 systemd-logind[1293]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:43:26.747221 systemd-logind[1293]: Removed session 20. Mar 17 18:43:26.749558 kernel: audit: type=1106 audit(1742237006.740:497): pid=4946 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.749632 kernel: audit: type=1104 audit(1742237006.740:498): pid=4946 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:26.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.81:22-10.0.0.1:40444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:43:27.117801 env[1309]: time="2025-03-17T18:43:27.117748237Z" level=info msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\"" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.169 [INFO][4979] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.169 [INFO][4979] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" iface="eth0" netns="/var/run/netns/cni-e516e4f7-3859-339f-ba39-3002729b1894" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.169 [INFO][4979] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" iface="eth0" netns="/var/run/netns/cni-e516e4f7-3859-339f-ba39-3002729b1894" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.169 [INFO][4979] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" iface="eth0" netns="/var/run/netns/cni-e516e4f7-3859-339f-ba39-3002729b1894" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.169 [INFO][4979] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.169 [INFO][4979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.190 [INFO][4987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.190 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.190 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.196 [WARNING][4987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.196 [INFO][4987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.197 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:27.201447 env[1309]: 2025-03-17 18:43:27.199 [INFO][4979] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:27.202058 env[1309]: time="2025-03-17T18:43:27.201573849Z" level=info msg="TearDown network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" successfully" Mar 17 18:43:27.202058 env[1309]: time="2025-03-17T18:43:27.201622352Z" level=info msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" returns successfully" Mar 17 18:43:27.202768 env[1309]: time="2025-03-17T18:43:27.202738635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwbh5,Uid:06aa47e7-14c4-4c99-9d64-88ed0bae7c98,Namespace:calico-system,Attempt:1,}" Mar 17 18:43:27.204531 systemd[1]: run-netns-cni\x2de516e4f7\x2d3859\x2d339f\x2dba39\x2d3002729b1894.mount: Deactivated successfully. 
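Calico's IPAM, as the `ipam/ipam.go` records in this log show, carves the pod CIDR into /26 blocks with per-host affinity, then assigns individual /32 addresses from a host's block. The arithmetic can be sanity-checked with Python's stdlib `ipaddress` module (illustrative only, using the block and pod IP that appear in these records):

```python
import ipaddress

# Calico pins /26 blocks to hosts; this node's block per the ipam log
# lines is 192.168.88.128/26, and csi-node-driver-rwbh5 is assigned .131.
block = ipaddress.ip_network("192.168.88.128/26")
pod_ip = ipaddress.ip_address("192.168.88.131")

print(block.num_addresses)        # 64 addresses per /26 block
print(pod_ip in block)            # True: the assigned IP falls in the block
print(block.broadcast_address)    # 192.168.88.191, last address of the block
```

This is why a single host can run up to 64 pods from one affine block before Calico must claim an additional /26.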
Mar 17 18:43:27.305179 systemd-networkd[1085]: calif83b62df300: Link UP Mar 17 18:43:27.307403 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:43:27.307451 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif83b62df300: link becomes ready Mar 17 18:43:27.307754 systemd-networkd[1085]: calif83b62df300: Gained carrier Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.246 [INFO][4995] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rwbh5-eth0 csi-node-driver- calico-system 06aa47e7-14c4-4c99-9d64-88ed0bae7c98 1145 0 2025-03-17 18:42:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rwbh5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif83b62df300 [] []}} ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Namespace="calico-system" Pod="csi-node-driver-rwbh5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rwbh5-" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.246 [INFO][4995] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Namespace="calico-system" Pod="csi-node-driver-rwbh5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.270 [INFO][5009] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" HandleID="k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 
18:43:27.320839 env[1309]: 2025-03-17 18:43:27.279 [INFO][5009] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" HandleID="k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000515d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rwbh5", "timestamp":"2025-03-17 18:43:27.270442637 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.279 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.279 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.279 [INFO][5009] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.280 [INFO][5009] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" host="localhost" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.283 [INFO][5009] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.286 [INFO][5009] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.287 [INFO][5009] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.289 [INFO][5009] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.289 [INFO][5009] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" host="localhost" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.291 [INFO][5009] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.296 [INFO][5009] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" host="localhost" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.300 [INFO][5009] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" host="localhost" Mar 17 
18:43:27.320839 env[1309]: 2025-03-17 18:43:27.300 [INFO][5009] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" host="localhost" Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.300 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:27.320839 env[1309]: 2025-03-17 18:43:27.300 [INFO][5009] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" HandleID="k8s-pod-network.44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.322305 env[1309]: 2025-03-17 18:43:27.303 [INFO][4995] cni-plugin/k8s.go 386: Populated endpoint ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Namespace="calico-system" Pod="csi-node-driver-rwbh5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rwbh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rwbh5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06aa47e7-14c4-4c99-9d64-88ed0bae7c98", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rwbh5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif83b62df300", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:27.322305 env[1309]: 2025-03-17 18:43:27.303 [INFO][4995] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Namespace="calico-system" Pod="csi-node-driver-rwbh5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.322305 env[1309]: 2025-03-17 18:43:27.303 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif83b62df300 ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Namespace="calico-system" Pod="csi-node-driver-rwbh5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.322305 env[1309]: 2025-03-17 18:43:27.308 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Namespace="calico-system" Pod="csi-node-driver-rwbh5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.322305 env[1309]: 2025-03-17 18:43:27.309 [INFO][4995] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Namespace="calico-system" Pod="csi-node-driver-rwbh5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rwbh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rwbh5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06aa47e7-14c4-4c99-9d64-88ed0bae7c98", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f", Pod:"csi-node-driver-rwbh5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif83b62df300", MAC:"06:fb:20:d7:75:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:27.322305 env[1309]: 2025-03-17 18:43:27.317 [INFO][4995] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f" Namespace="calico-system" Pod="csi-node-driver-rwbh5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:27.329000 audit[5035]: NETFILTER_CFG table=filter:108 family=2 entries=42 op=nft_register_chain pid=5035 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:27.329000 audit[5035]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=21524 a0=3 a1=7fff26931820 a2=0 a3=7fff2693180c items=0 ppid=4571 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:27.329000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:43:27.331476 env[1309]: time="2025-03-17T18:43:27.331413944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:27.331476 env[1309]: time="2025-03-17T18:43:27.331456565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:27.331548 env[1309]: time="2025-03-17T18:43:27.331469579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:27.331656 env[1309]: time="2025-03-17T18:43:27.331610888Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f pid=5039 runtime=io.containerd.runc.v2 Mar 17 18:43:27.353174 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:43:27.363501 env[1309]: time="2025-03-17T18:43:27.363447005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rwbh5,Uid:06aa47e7-14c4-4c99-9d64-88ed0bae7c98,Namespace:calico-system,Attempt:1,} returns sandbox id \"44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f\"" Mar 17 18:43:27.364822 env[1309]: time="2025-03-17T18:43:27.364794598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Mar 17 18:43:28.914008 env[1309]: time="2025-03-17T18:43:28.913957673Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:28.916081 env[1309]: time="2025-03-17T18:43:28.916057575Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:28.917637 env[1309]: time="2025-03-17T18:43:28.917617030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:28.919144 env[1309]: time="2025-03-17T18:43:28.919098567Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:43:28.919472 env[1309]: time="2025-03-17T18:43:28.919451558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Mar 17 18:43:28.921108 env[1309]: time="2025-03-17T18:43:28.921085855Z" level=info msg="CreateContainer within sandbox \"44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 18:43:28.935302 env[1309]: time="2025-03-17T18:43:28.935253501Z" level=info msg="CreateContainer within sandbox \"44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"310f3210f41a911327224f481ba2808e900049b8a8d50587b1434f1c0ae7829b\"" Mar 17 18:43:28.935710 env[1309]: time="2025-03-17T18:43:28.935687987Z" level=info msg="StartContainer for \"310f3210f41a911327224f481ba2808e900049b8a8d50587b1434f1c0ae7829b\"" Mar 17 18:43:28.952433 systemd[1]: run-containerd-runc-k8s.io-310f3210f41a911327224f481ba2808e900049b8a8d50587b1434f1c0ae7829b-runc.7rjiQu.mount: Deactivated successfully. 
Mar 17 18:43:28.979301 env[1309]: time="2025-03-17T18:43:28.979254255Z" level=info msg="StartContainer for \"310f3210f41a911327224f481ba2808e900049b8a8d50587b1434f1c0ae7829b\" returns successfully" Mar 17 18:43:28.980769 env[1309]: time="2025-03-17T18:43:28.980730842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Mar 17 18:43:29.117134 kubelet[2216]: E0317 18:43:29.117101 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:29.118222 env[1309]: time="2025-03-17T18:43:29.118181911Z" level=info msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\"" Mar 17 18:43:29.118562 env[1309]: time="2025-03-17T18:43:29.118540272Z" level=info msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" Mar 17 18:43:29.189943 systemd-networkd[1085]: calif83b62df300: Gained IPv6LL Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.163 [INFO][5143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.163 [INFO][5143] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" iface="eth0" netns="/var/run/netns/cni-342fa1c3-5b01-2b0d-d29b-47dab4c09e2d" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.163 [INFO][5143] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" iface="eth0" netns="/var/run/netns/cni-342fa1c3-5b01-2b0d-d29b-47dab4c09e2d" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.164 [INFO][5143] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" iface="eth0" netns="/var/run/netns/cni-342fa1c3-5b01-2b0d-d29b-47dab4c09e2d" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.164 [INFO][5143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.164 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.186 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.186 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.186 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.191 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.191 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.193 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:29.198099 env[1309]: 2025-03-17 18:43:29.196 [INFO][5143] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:29.198963 env[1309]: time="2025-03-17T18:43:29.198919539Z" level=info msg="TearDown network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" successfully" Mar 17 18:43:29.199024 env[1309]: time="2025-03-17T18:43:29.198963562Z" level=info msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" returns successfully" Mar 17 18:43:29.199337 kubelet[2216]: E0317 18:43:29.199305 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:29.199752 env[1309]: time="2025-03-17T18:43:29.199717415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qvfwr,Uid:182af1a9-314d-44fe-9729-65e5c19acc5a,Namespace:kube-system,Attempt:1,}" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.161 [INFO][5144] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 
18:43:29.202768 env[1309]: 2025-03-17 18:43:29.162 [INFO][5144] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" iface="eth0" netns="/var/run/netns/cni-aa7ca881-1297-41f9-62ac-2edf1e93d7cf" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.162 [INFO][5144] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" iface="eth0" netns="/var/run/netns/cni-aa7ca881-1297-41f9-62ac-2edf1e93d7cf" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.162 [INFO][5144] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" iface="eth0" netns="/var/run/netns/cni-aa7ca881-1297-41f9-62ac-2edf1e93d7cf" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.162 [INFO][5144] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.162 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.186 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.186 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.193 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.197 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.197 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.198 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:29.202768 env[1309]: 2025-03-17 18:43:29.200 [INFO][5144] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:29.203439 env[1309]: time="2025-03-17T18:43:29.203409412Z" level=info msg="TearDown network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" successfully" Mar 17 18:43:29.203492 env[1309]: time="2025-03-17T18:43:29.203439418Z" level=info msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" returns successfully" Mar 17 18:43:29.204102 env[1309]: time="2025-03-17T18:43:29.204049128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75cb744445-d9hht,Uid:7b65dbfa-20f5-4d4f-8c03-972a550ae421,Namespace:calico-system,Attempt:1,}" Mar 17 18:43:29.309465 systemd-networkd[1085]: cali10dbb1f0f29: Link UP Mar 17 18:43:29.311652 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:43:29.311706 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali10dbb1f0f29: link becomes ready Mar 17 18:43:29.311828 systemd-networkd[1085]: cali10dbb1f0f29: Gained carrier Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.248 [INFO][5173] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0 coredns-7db6d8ff4d- kube-system 182af1a9-314d-44fe-9729-65e5c19acc5a 1172 0 2025-03-17 18:42:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-qvfwr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10dbb1f0f29 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qvfwr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qvfwr-" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 
18:43:29.248 [INFO][5173] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qvfwr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.272 [INFO][5200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" HandleID="k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.281 [INFO][5200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" HandleID="k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000512d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-qvfwr", "timestamp":"2025-03-17 18:43:29.272975788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.281 [INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.281 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.281 [INFO][5200] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.283 [INFO][5200] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" host="localhost" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.287 [INFO][5200] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.289 [INFO][5200] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.290 [INFO][5200] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.292 [INFO][5200] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.293 [INFO][5200] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" host="localhost" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.294 [INFO][5200] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.297 [INFO][5200] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" host="localhost" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.303 [INFO][5200] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" host="localhost" Mar 17 
18:43:29.325505 env[1309]: 2025-03-17 18:43:29.303 [INFO][5200] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" host="localhost" Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.303 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:29.325505 env[1309]: 2025-03-17 18:43:29.304 [INFO][5200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" HandleID="k8s-pod-network.51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.326579 env[1309]: 2025-03-17 18:43:29.306 [INFO][5173] cni-plugin/k8s.go 386: Populated endpoint ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qvfwr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"182af1a9-314d-44fe-9729-65e5c19acc5a", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-qvfwr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10dbb1f0f29", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:29.326579 env[1309]: 2025-03-17 18:43:29.306 [INFO][5173] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qvfwr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.326579 env[1309]: 2025-03-17 18:43:29.306 [INFO][5173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10dbb1f0f29 ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qvfwr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.326579 env[1309]: 2025-03-17 18:43:29.312 [INFO][5173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qvfwr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.326579 env[1309]: 2025-03-17 18:43:29.312 [INFO][5173] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qvfwr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"182af1a9-314d-44fe-9729-65e5c19acc5a", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc", Pod:"coredns-7db6d8ff4d-qvfwr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10dbb1f0f29", MAC:"86:4e:01:3a:91:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:29.326579 env[1309]: 2025-03-17 18:43:29.323 [INFO][5173] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qvfwr" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:29.339000 audit[5239]: NETFILTER_CFG table=filter:109 family=2 entries=46 op=nft_register_chain pid=5239 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:29.339000 audit[5239]: SYSCALL arch=c000003e syscall=46 success=yes exit=22712 a0=3 a1=7ffe7b6899b0 a2=0 a3=7ffe7b68999c items=0 ppid=4571 pid=5239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:29.339000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:43:29.341740 env[1309]: time="2025-03-17T18:43:29.341686518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:29.341922 env[1309]: time="2025-03-17T18:43:29.341899773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:29.342044 env[1309]: time="2025-03-17T18:43:29.342021574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:29.342413 env[1309]: time="2025-03-17T18:43:29.342348556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc pid=5243 runtime=io.containerd.runc.v2 Mar 17 18:43:29.345311 systemd-networkd[1085]: cali9f312783f28: Link UP Mar 17 18:43:29.348531 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9f312783f28: link becomes ready Mar 17 18:43:29.348092 systemd-networkd[1085]: cali9f312783f28: Gained carrier Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.247 [INFO][5184] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0 calico-kube-controllers-75cb744445- calico-system 7b65dbfa-20f5-4d4f-8c03-972a550ae421 1171 0 2025-03-17 18:42:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75cb744445 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75cb744445-d9hht eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9f312783f28 [] []}} ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Namespace="calico-system" Pod="calico-kube-controllers-75cb744445-d9hht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.247 [INFO][5184] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Namespace="calico-system" Pod="calico-kube-controllers-75cb744445-d9hht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 
17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.277 [INFO][5205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" HandleID="k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.285 [INFO][5205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" HandleID="k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75cb744445-d9hht", "timestamp":"2025-03-17 18:43:29.277974368 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.285 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.303 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.304 [INFO][5205] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.305 [INFO][5205] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" host="localhost" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.309 [INFO][5205] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.313 [INFO][5205] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.319 [INFO][5205] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.323 [INFO][5205] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.323 [INFO][5205] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" host="localhost" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.324 [INFO][5205] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701 Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.328 [INFO][5205] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" host="localhost" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.333 [INFO][5205] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" host="localhost" Mar 17 
18:43:29.362288 env[1309]: 2025-03-17 18:43:29.333 [INFO][5205] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" host="localhost" Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.333 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:29.362288 env[1309]: 2025-03-17 18:43:29.333 [INFO][5205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" HandleID="k8s-pod-network.f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.363102 env[1309]: 2025-03-17 18:43:29.335 [INFO][5184] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Namespace="calico-system" Pod="calico-kube-controllers-75cb744445-d9hht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0", GenerateName:"calico-kube-controllers-75cb744445-", Namespace:"calico-system", SelfLink:"", UID:"7b65dbfa-20f5-4d4f-8c03-972a550ae421", ResourceVersion:"1171", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75cb744445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75cb744445-d9hht", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f312783f28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:29.363102 env[1309]: 2025-03-17 18:43:29.335 [INFO][5184] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Namespace="calico-system" Pod="calico-kube-controllers-75cb744445-d9hht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.363102 env[1309]: 2025-03-17 18:43:29.335 [INFO][5184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f312783f28 ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Namespace="calico-system" Pod="calico-kube-controllers-75cb744445-d9hht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.363102 env[1309]: 2025-03-17 18:43:29.348 [INFO][5184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Namespace="calico-system" Pod="calico-kube-controllers-75cb744445-d9hht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.363102 env[1309]: 2025-03-17 18:43:29.349 [INFO][5184] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" 
Namespace="calico-system" Pod="calico-kube-controllers-75cb744445-d9hht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0", GenerateName:"calico-kube-controllers-75cb744445-", Namespace:"calico-system", SelfLink:"", UID:"7b65dbfa-20f5-4d4f-8c03-972a550ae421", ResourceVersion:"1171", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75cb744445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701", Pod:"calico-kube-controllers-75cb744445-d9hht", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f312783f28", MAC:"c2:5b:b0:99:eb:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:29.363102 env[1309]: 2025-03-17 18:43:29.357 [INFO][5184] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701" Namespace="calico-system" 
Pod="calico-kube-controllers-75cb744445-d9hht" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:29.368000 audit[5282]: NETFILTER_CFG table=filter:110 family=2 entries=46 op=nft_register_chain pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:29.368000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=22204 a0=3 a1=7ffe2a74d310 a2=0 a3=7ffe2a74d2fc items=0 ppid=4571 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:29.368000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:43:29.370966 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:43:29.375987 env[1309]: time="2025-03-17T18:43:29.375799963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:29.375987 env[1309]: time="2025-03-17T18:43:29.375837675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:29.375987 env[1309]: time="2025-03-17T18:43:29.375847012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:29.376139 env[1309]: time="2025-03-17T18:43:29.376094943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701 pid=5290 runtime=io.containerd.runc.v2 Mar 17 18:43:29.394636 env[1309]: time="2025-03-17T18:43:29.394601657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qvfwr,Uid:182af1a9-314d-44fe-9729-65e5c19acc5a,Namespace:kube-system,Attempt:1,} returns sandbox id \"51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc\"" Mar 17 18:43:29.395054 kubelet[2216]: E0317 18:43:29.395033 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:29.397481 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:43:29.397594 env[1309]: time="2025-03-17T18:43:29.397505406Z" level=info msg="CreateContainer within sandbox \"51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:43:29.410851 env[1309]: time="2025-03-17T18:43:29.410816254Z" level=info msg="CreateContainer within sandbox \"51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b332c69c0675dbd6c205fffc2b472fb866d0d1636379b9614a25078e3f5bfc1\"" Mar 17 18:43:29.411538 env[1309]: time="2025-03-17T18:43:29.411521675Z" level=info msg="StartContainer for \"4b332c69c0675dbd6c205fffc2b472fb866d0d1636379b9614a25078e3f5bfc1\"" Mar 17 18:43:29.420947 env[1309]: time="2025-03-17T18:43:29.420922291Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-75cb744445-d9hht,Uid:7b65dbfa-20f5-4d4f-8c03-972a550ae421,Namespace:calico-system,Attempt:1,} returns sandbox id \"f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701\"" Mar 17 18:43:29.458071 env[1309]: time="2025-03-17T18:43:29.458023395Z" level=info msg="StartContainer for \"4b332c69c0675dbd6c205fffc2b472fb866d0d1636379b9614a25078e3f5bfc1\" returns successfully" Mar 17 18:43:29.937337 systemd[1]: run-netns-cni\x2daa7ca881\x2d1297\x2d41f9\x2d62ac\x2d2edf1e93d7cf.mount: Deactivated successfully. Mar 17 18:43:29.937497 systemd[1]: run-netns-cni\x2d342fa1c3\x2d5b01\x2d2b0d\x2dd29b\x2d47dab4c09e2d.mount: Deactivated successfully. Mar 17 18:43:30.042665 kubelet[2216]: E0317 18:43:30.042634 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:30.052376 kubelet[2216]: I0317 18:43:30.052287 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qvfwr" podStartSLOduration=80.052269414 podStartE2EDuration="1m20.052269414s" podCreationTimestamp="2025-03-17 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:30.05193565 +0000 UTC m=+95.015504706" watchObservedRunningTime="2025-03-17 18:43:30.052269414 +0000 UTC m=+95.015838450" Mar 17 18:43:30.062000 audit[5373]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=5373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:30.062000 audit[5373]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe214de260 a2=0 a3=7ffe214de24c items=0 ppid=2420 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:30.062000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:30.068000 audit[5373]: NETFILTER_CFG table=nat:112 family=2 entries=16 op=nft_register_rule pid=5373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:30.068000 audit[5373]: SYSCALL arch=c000003e syscall=46 success=yes exit=4236 a0=3 a1=7ffe214de260 a2=0 a3=0 items=0 ppid=2420 pid=5373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:30.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:30.079000 audit[5375]: NETFILTER_CFG table=filter:113 family=2 entries=11 op=nft_register_rule pid=5375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:30.079000 audit[5375]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc452fa900 a2=0 a3=7ffc452fa8ec items=0 ppid=2420 pid=5375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:30.079000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:30.085000 audit[5375]: NETFILTER_CFG table=nat:114 family=2 entries=37 op=nft_register_chain pid=5375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:30.085000 audit[5375]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffc452fa900 a2=0 a3=7ffc452fa8ec items=0 ppid=2420 pid=5375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:30.085000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:30.851113 systemd-networkd[1085]: cali10dbb1f0f29: Gained IPv6LL Mar 17 18:43:31.046090 kubelet[2216]: E0317 18:43:31.046055 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:31.117707 env[1309]: time="2025-03-17T18:43:31.117588418Z" level=info msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" Mar 17 18:43:31.236977 systemd-networkd[1085]: cali9f312783f28: Gained IPv6LL Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.293 [INFO][5393] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.297 [INFO][5393] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" iface="eth0" netns="/var/run/netns/cni-31ac50c7-d246-4a45-bc7a-08d77b373706" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.297 [INFO][5393] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" iface="eth0" netns="/var/run/netns/cni-31ac50c7-d246-4a45-bc7a-08d77b373706" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.297 [INFO][5393] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
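The audit records above carry the invoking command line in a hex-encoded `PROCTITLE` field (NUL-separated argv, hex-dumped). As a minimal sketch, the field can be decoded like this; the helper name is illustrative, and the hex value is copied from the `iptables-restore` record above:

```python
# Decode the hex-encoded PROCTITLE field of a Linux audit record.
# The kernel NUL-separates argv entries, so hex-decode the string
# and split on b"\x00" to recover the original command line.
def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode() for part in raw.split(b"\x00"))

# PROCTITLE value taken verbatim from the audit record above.
cmd = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
print(cmd)  # iptables-restore -w 5 -W 100000 --noflush --counters
```

The same decoding applies to the `iptables-nft-restore` records elsewhere in this log.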
ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" iface="eth0" netns="/var/run/netns/cni-31ac50c7-d246-4a45-bc7a-08d77b373706" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.297 [INFO][5393] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.297 [INFO][5393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.321 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.321 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.321 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.326 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.326 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.327 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:31.330560 env[1309]: 2025-03-17 18:43:31.328 [INFO][5393] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:31.334059 systemd[1]: run-netns-cni\x2d31ac50c7\x2dd246\x2d4a45\x2dbc7a\x2d08d77b373706.mount: Deactivated successfully. 
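The Calico IPAM entries scattered through this journal (e.g. `ipam/ipam.go 1216: Successfully claimed IPs: [...]`) record which pod addresses were assigned from the `192.168.88.128/26` block. A minimal, illustrative sketch of pulling those CIDRs out of the raw log text; the regex and sample line are assumptions modeled on the entries above, not part of any Calico tooling:

```python
import re

# Match the CIDR inside Calico IPAM claim messages such as:
#   "ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] ..."
CLAIM_RE = re.compile(r"Successfully claimed IPs: \[([0-9./]+)\]")

def claimed_ips(log_text: str) -> list[str]:
    """Return every CIDR claimed by IPAM in the given journal text."""
    return CLAIM_RE.findall(log_text)

# Sample line shaped like the IPAM entries in this log.
sample = (
    'ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] '
    'block=192.168.88.128/26 host="localhost"'
)
print(claimed_ips(sample))  # ['192.168.88.133/26']
```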
Mar 17 18:43:31.335016 env[1309]: time="2025-03-17T18:43:31.334970843Z" level=info msg="TearDown network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" successfully" Mar 17 18:43:31.335016 env[1309]: time="2025-03-17T18:43:31.335006982Z" level=info msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" returns successfully" Mar 17 18:43:31.335315 kubelet[2216]: E0317 18:43:31.335293 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:31.335727 env[1309]: time="2025-03-17T18:43:31.335689298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k6vcp,Uid:5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04,Namespace:kube-system,Attempt:1,}" Mar 17 18:43:31.506950 env[1309]: time="2025-03-17T18:43:31.506883323Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:31.511451 env[1309]: time="2025-03-17T18:43:31.511405242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:31.514408 env[1309]: time="2025-03-17T18:43:31.514379211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:31.517554 env[1309]: time="2025-03-17T18:43:31.517522270Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 
18:43:31.518168 env[1309]: time="2025-03-17T18:43:31.518141778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Mar 17 18:43:31.520193 env[1309]: time="2025-03-17T18:43:31.520146896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Mar 17 18:43:31.521005 env[1309]: time="2025-03-17T18:43:31.520968757Z" level=info msg="CreateContainer within sandbox \"44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 18:43:31.526471 systemd-networkd[1085]: calib798aa684fc: Link UP Mar 17 18:43:31.530724 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:43:31.530802 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib798aa684fc: link becomes ready Mar 17 18:43:31.530422 systemd-networkd[1085]: calib798aa684fc: Gained carrier Mar 17 18:43:31.539905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612643982.mount: Deactivated successfully. 
Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.449 [INFO][5410] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0 coredns-7db6d8ff4d- kube-system 5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04 1205 0 2025-03-17 18:42:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-k6vcp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib798aa684fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k6vcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k6vcp-" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.449 [INFO][5410] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k6vcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.482 [INFO][5423] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" HandleID="k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.490 [INFO][5423] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" HandleID="k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0003684f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-k6vcp", "timestamp":"2025-03-17 18:43:31.482951427 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.490 [INFO][5423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.490 [INFO][5423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.490 [INFO][5423] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.492 [INFO][5423] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.496 [INFO][5423] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.500 [INFO][5423] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.501 [INFO][5423] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.503 [INFO][5423] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.503 [INFO][5423] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.508 
[INFO][5423] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109 Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.512 [INFO][5423] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.517 [INFO][5423] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.518 [INFO][5423] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" host="localhost" Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.518 [INFO][5423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:43:31.543699 env[1309]: 2025-03-17 18:43:31.518 [INFO][5423] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" HandleID="k8s-pod-network.2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.544274 env[1309]: 2025-03-17 18:43:31.520 [INFO][5410] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k6vcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04", ResourceVersion:"1205", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-k6vcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib798aa684fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:31.544274 env[1309]: 2025-03-17 18:43:31.520 [INFO][5410] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k6vcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.544274 env[1309]: 2025-03-17 18:43:31.521 [INFO][5410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib798aa684fc ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k6vcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.544274 env[1309]: 2025-03-17 18:43:31.531 [INFO][5410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k6vcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.544274 env[1309]: 2025-03-17 18:43:31.531 [INFO][5410] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k6vcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04", ResourceVersion:"1205", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109", Pod:"coredns-7db6d8ff4d-k6vcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib798aa684fc", MAC:"6e:ac:e2:3a:54:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:31.544274 env[1309]: 2025-03-17 18:43:31.542 [INFO][5410] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k6vcp" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:31.546776 env[1309]: 
time="2025-03-17T18:43:31.546743083Z" level=info msg="CreateContainer within sandbox \"44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7c08d99f5d689e29b7241a6edbe33bfe8c72308d14865447c3658fde1ceaa447\"" Mar 17 18:43:31.547513 env[1309]: time="2025-03-17T18:43:31.547488458Z" level=info msg="StartContainer for \"7c08d99f5d689e29b7241a6edbe33bfe8c72308d14865447c3658fde1ceaa447\"" Mar 17 18:43:31.552000 audit[5450]: NETFILTER_CFG table=filter:115 family=2 entries=46 op=nft_register_chain pid=5450 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:43:31.552000 audit[5450]: SYSCALL arch=c000003e syscall=46 success=yes exit=21784 a0=3 a1=7fff363ffee0 a2=0 a3=7fff363ffecc items=0 ppid=4571 pid=5450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:31.552000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:43:31.558621 env[1309]: time="2025-03-17T18:43:31.558463073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:31.558621 env[1309]: time="2025-03-17T18:43:31.558498871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:31.558621 env[1309]: time="2025-03-17T18:43:31.558508069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:31.558964 env[1309]: time="2025-03-17T18:43:31.558917827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109 pid=5467 runtime=io.containerd.runc.v2 Mar 17 18:43:31.589199 systemd-resolved[1224]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:43:31.614542 env[1309]: time="2025-03-17T18:43:31.614505070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k6vcp,Uid:5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04,Namespace:kube-system,Attempt:1,} returns sandbox id \"2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109\"" Mar 17 18:43:31.615323 kubelet[2216]: E0317 18:43:31.615296 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:31.617479 env[1309]: time="2025-03-17T18:43:31.617442810Z" level=info msg="CreateContainer within sandbox \"2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:43:31.681361 env[1309]: time="2025-03-17T18:43:31.681285201Z" level=info msg="StartContainer for \"7c08d99f5d689e29b7241a6edbe33bfe8c72308d14865447c3658fde1ceaa447\" returns successfully" Mar 17 18:43:31.693427 env[1309]: time="2025-03-17T18:43:31.693375877Z" level=info msg="CreateContainer within sandbox \"2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77c4c81e636646b8b7692b67609ce64df04044dee5002a8d9a1985c6c0c97e89\"" Mar 17 18:43:31.694044 env[1309]: time="2025-03-17T18:43:31.694003869Z" level=info msg="StartContainer for \"77c4c81e636646b8b7692b67609ce64df04044dee5002a8d9a1985c6c0c97e89\"" 
Mar 17 18:43:31.738583 env[1309]: time="2025-03-17T18:43:31.738532466Z" level=info msg="StartContainer for \"77c4c81e636646b8b7692b67609ce64df04044dee5002a8d9a1985c6c0c97e89\" returns successfully" Mar 17 18:43:31.746908 kernel: kauditd_printk_skb: 25 callbacks suppressed Mar 17 18:43:31.747024 kernel: audit: type=1130 audit(1742237011.742:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.81:22-10.0.0.1:40448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:31.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.81:22-10.0.0.1:40448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:31.743441 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:40448.service. Mar 17 18:43:31.785000 audit[5561]: USER_ACCT pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.786963 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 40448 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:31.790914 kernel: audit: type=1101 audit(1742237011.785:509): pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.790000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.791620 sshd[5561]: 
pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:31.797477 kernel: audit: type=1103 audit(1742237011.790:510): pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.797614 kernel: audit: type=1006 audit(1742237011.790:511): pid=5561 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Mar 17 18:43:31.796555 systemd-logind[1293]: New session 21 of user core. Mar 17 18:43:31.797218 systemd[1]: Started session-21.scope. Mar 17 18:43:31.790000 audit[5561]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffce683670 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:31.790000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:31.803608 kernel: audit: type=1300 audit(1742237011.790:511): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffce683670 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:31.803647 kernel: audit: type=1327 audit(1742237011.790:511): proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:31.801000 audit[5561]: USER_START pid=5561 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.807904 kernel: audit: type=1105 audit(1742237011.801:512): pid=5561 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.808004 kernel: audit: type=1103 audit(1742237011.802:513): pid=5568 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.802000 audit[5568]: CRED_ACQ pid=5568 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.921776 sshd[5561]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:31.922000 audit[5561]: USER_END pid=5561 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.924688 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:40448.service: Deactivated successfully. Mar 17 18:43:31.925636 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:43:31.926096 systemd-logind[1293]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:43:31.926777 systemd-logind[1293]: Removed session 21. 
Mar 17 18:43:31.922000 audit[5561]: CRED_DISP pid=5561 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.930867 kernel: audit: type=1106 audit(1742237011.922:514): pid=5561 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.930928 kernel: audit: type=1104 audit(1742237011.922:515): pid=5561 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:31.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.81:22-10.0.0.1:40448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:43:32.050280 kubelet[2216]: E0317 18:43:32.049983 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:32.053079 kubelet[2216]: E0317 18:43:32.052994 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:32.069000 audit[5580]: NETFILTER_CFG table=filter:116 family=2 entries=8 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:32.069000 audit[5580]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffca6d02010 a2=0 a3=7ffca6d01ffc items=0 ppid=2420 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:32.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:32.076923 kubelet[2216]: I0317 18:43:32.076287 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rwbh5" podStartSLOduration=71.921368558 podStartE2EDuration="1m16.076269551s" podCreationTimestamp="2025-03-17 18:42:16 +0000 UTC" firstStartedPulling="2025-03-17 18:43:27.364360682 +0000 UTC m=+92.327929728" lastFinishedPulling="2025-03-17 18:43:31.519261675 +0000 UTC m=+96.482830721" observedRunningTime="2025-03-17 18:43:32.07625848 +0000 UTC m=+97.039827526" watchObservedRunningTime="2025-03-17 18:43:32.076269551 +0000 UTC m=+97.039838597" Mar 17 18:43:32.076923 kubelet[2216]: I0317 18:43:32.076439 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k6vcp" 
podStartSLOduration=82.076434915 podStartE2EDuration="1m22.076434915s" podCreationTimestamp="2025-03-17 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:32.063664686 +0000 UTC m=+97.027233732" watchObservedRunningTime="2025-03-17 18:43:32.076434915 +0000 UTC m=+97.040003961" Mar 17 18:43:32.077000 audit[5580]: NETFILTER_CFG table=nat:117 family=2 entries=46 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:32.077000 audit[5580]: SYSCALL arch=c000003e syscall=46 success=yes exit=14964 a0=3 a1=7ffca6d02010 a2=0 a3=7ffca6d01ffc items=0 ppid=2420 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:32.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:32.094000 audit[5582]: NETFILTER_CFG table=filter:118 family=2 entries=8 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:32.094000 audit[5582]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffda608fb60 a2=0 a3=7ffda608fb4c items=0 ppid=2420 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:32.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:32.099000 audit[5582]: NETFILTER_CFG table=nat:119 family=2 entries=58 op=nft_register_chain pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:32.099000 audit[5582]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=20628 a0=3 a1=7ffda608fb60 a2=0 a3=7ffda608fb4c items=0 ppid=2420 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:32.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:32.218983 kubelet[2216]: I0317 18:43:32.218947 2216 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 18:43:32.218983 kubelet[2216]: I0317 18:43:32.218976 2216 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 18:43:32.963026 systemd-networkd[1085]: calib798aa684fc: Gained IPv6LL Mar 17 18:43:33.058473 kubelet[2216]: E0317 18:43:33.058440 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:34.060652 kubelet[2216]: E0317 18:43:34.060621 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:34.401507 env[1309]: time="2025-03-17T18:43:34.401388463Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:34.403318 env[1309]: time="2025-03-17T18:43:34.403270224Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 
18:43:34.404810 env[1309]: time="2025-03-17T18:43:34.404756234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:34.406352 env[1309]: time="2025-03-17T18:43:34.406312467Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:34.406896 env[1309]: time="2025-03-17T18:43:34.406800633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Mar 17 18:43:34.414993 env[1309]: time="2025-03-17T18:43:34.414942756Z" level=info msg="CreateContainer within sandbox \"f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 18:43:34.430057 env[1309]: time="2025-03-17T18:43:34.429999499Z" level=info msg="CreateContainer within sandbox \"f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ea62c24af54b2fa998aa3c7ccdb34d453674e114c414643ecd7dd961746a4fd6\"" Mar 17 18:43:34.430572 env[1309]: time="2025-03-17T18:43:34.430542540Z" level=info msg="StartContainer for \"ea62c24af54b2fa998aa3c7ccdb34d453674e114c414643ecd7dd961746a4fd6\"" Mar 17 18:43:34.488631 env[1309]: time="2025-03-17T18:43:34.488556768Z" level=info msg="StartContainer for \"ea62c24af54b2fa998aa3c7ccdb34d453674e114c414643ecd7dd961746a4fd6\" returns successfully" Mar 17 18:43:35.073618 kubelet[2216]: I0317 18:43:35.073543 2216 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75cb744445-d9hht" 
podStartSLOduration=74.088157421 podStartE2EDuration="1m19.073505203s" podCreationTimestamp="2025-03-17 18:42:16 +0000 UTC" firstStartedPulling="2025-03-17 18:43:29.422214357 +0000 UTC m=+94.385783403" lastFinishedPulling="2025-03-17 18:43:34.407562129 +0000 UTC m=+99.371131185" observedRunningTime="2025-03-17 18:43:35.072531136 +0000 UTC m=+100.036100182" watchObservedRunningTime="2025-03-17 18:43:35.073505203 +0000 UTC m=+100.037074269" Mar 17 18:43:35.117490 kubelet[2216]: E0317 18:43:35.117411 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:36.925445 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:38302.service. Mar 17 18:43:36.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.81:22-10.0.0.1:38302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:36.927007 kernel: kauditd_printk_skb: 13 callbacks suppressed Mar 17 18:43:36.927070 kernel: audit: type=1130 audit(1742237016.923:521): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.81:22-10.0.0.1:38302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:43:36.956000 audit[5661]: USER_ACCT pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:36.958169 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 38302 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:36.960000 audit[5661]: CRED_ACQ pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:36.963072 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:36.966601 systemd-logind[1293]: New session 22 of user core. Mar 17 18:43:36.967320 kernel: audit: type=1101 audit(1742237016.956:522): pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:36.967375 kernel: audit: type=1103 audit(1742237016.960:523): pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:36.967535 kernel: audit: type=1006 audit(1742237016.960:524): pid=5661 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Mar 17 18:43:36.967664 systemd[1]: Started session-22.scope. 
Mar 17 18:43:36.972378 kernel: audit: type=1300 audit(1742237016.960:524): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff05df5830 a2=3 a3=0 items=0 ppid=1 pid=5661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:36.960000 audit[5661]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff05df5830 a2=3 a3=0 items=0 ppid=1 pid=5661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:36.960000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:36.977917 kernel: audit: type=1327 audit(1742237016.960:524): proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:36.977990 kernel: audit: type=1105 audit(1742237016.970:525): pid=5661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:36.970000 audit[5661]: USER_START pid=5661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:36.982485 kernel: audit: type=1103 audit(1742237016.972:526): pid=5664 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:36.972000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.092665 sshd[5661]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:37.092000 audit[5661]: USER_END pid=5661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.095276 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:38308.service. Mar 17 18:43:37.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.81:22-10.0.0.1:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:37.100834 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:38302.service: Deactivated successfully. Mar 17 18:43:37.102000 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:43:37.102823 systemd-logind[1293]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:43:37.103620 kernel: audit: type=1106 audit(1742237017.092:527): pid=5661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.103683 kernel: audit: type=1130 audit(1742237017.097:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.81:22-10.0.0.1:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:43:37.097000 audit[5661]: CRED_DISP pid=5661 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.81:22-10.0.0.1:38302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:37.103647 systemd-logind[1293]: Removed session 22. Mar 17 18:43:37.125000 audit[5673]: USER_ACCT pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.128084 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 38308 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:37.127000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.127000 audit[5673]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea5af74a0 a2=3 a3=0 items=0 ppid=1 pid=5673 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:37.127000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:37.129429 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:37.133498 systemd-logind[1293]: New session 23 of user core. Mar 17 18:43:37.134596 systemd[1]: Started session-23.scope. 
Mar 17 18:43:37.138000 audit[5673]: USER_START pid=5673 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.139000 audit[5678]: CRED_ACQ pid=5678 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.320011 sshd[5673]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:37.321000 audit[5673]: USER_END pid=5673 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.321000 audit[5673]: CRED_DISP pid=5673 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.81:22-10.0.0.1:38316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:37.323305 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:38316.service. Mar 17 18:43:37.324739 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:38308.service: Deactivated successfully. Mar 17 18:43:37.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.81:22-10.0.0.1:38308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:43:37.325842 systemd-logind[1293]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:43:37.325918 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:43:37.326885 systemd-logind[1293]: Removed session 23. Mar 17 18:43:37.354000 audit[5686]: USER_ACCT pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.356592 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 38316 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:37.355000 audit[5686]: CRED_ACQ pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.355000 audit[5686]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6d94cfe0 a2=3 a3=0 items=0 ppid=1 pid=5686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:37.355000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:37.357369 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:37.361189 systemd-logind[1293]: New session 24 of user core. Mar 17 18:43:37.361832 systemd[1]: Started session-24.scope. 
Mar 17 18:43:37.365000 audit[5686]: USER_START pid=5686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:37.366000 audit[5691]: CRED_ACQ pid=5691 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:38.800000 audit[5703]: NETFILTER_CFG table=filter:120 family=2 entries=20 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:38.800000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe889c25a0 a2=0 a3=7ffe889c258c items=0 ppid=2420 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:38.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:38.808000 audit[5703]: NETFILTER_CFG table=nat:121 family=2 entries=22 op=nft_register_rule pid=5703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:38.808000 audit[5703]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe889c25a0 a2=0 a3=0 items=0 ppid=2420 pid=5703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:38.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:38.826000 audit[5705]: 
NETFILTER_CFG table=filter:122 family=2 entries=32 op=nft_register_rule pid=5705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:38.826000 audit[5705]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe51fffd70 a2=0 a3=7ffe51fffd5c items=0 ppid=2420 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:38.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:38.829497 sshd[5686]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:38.829000 audit[5686]: USER_END pid=5686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:38.829000 audit[5686]: CRED_DISP pid=5686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:38.832016 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:38322.service. Mar 17 18:43:38.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.81:22-10.0.0.1:38322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:38.836126 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:38316.service: Deactivated successfully. 
Mar 17 18:43:38.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.81:22-10.0.0.1:38316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:38.837269 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:43:38.835000 audit[5705]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:43:38.837897 systemd-logind[1293]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:43:38.835000 audit[5705]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe51fffd70 a2=0 a3=0 items=0 ppid=2420 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:38.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:43:38.839121 systemd-logind[1293]: Removed session 24. 
Mar 17 18:43:38.862000 audit[5706]: USER_ACCT pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:38.864653 sshd[5706]: Accepted publickey for core from 10.0.0.1 port 38322 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:43:38.863000 audit[5706]: CRED_ACQ pid=5706 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:38.863000 audit[5706]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc652a7c0 a2=3 a3=0 items=0 ppid=1 pid=5706 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:38.863000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:43:38.865731 sshd[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:38.870406 systemd[1]: Started session-25.scope.
Mar 17 18:43:38.870720 systemd-logind[1293]: New session 25 of user core.
Mar 17 18:43:38.873000 audit[5706]: USER_START pid=5706 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:38.874000 audit[5711]: CRED_ACQ pid=5711 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.150144 sshd[5706]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:39.149000 audit[5706]: USER_END pid=5706 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.149000 audit[5706]: CRED_DISP pid=5706 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.152566 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:38328.service.
Mar 17 18:43:39.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.81:22-10.0.0.1:38328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:39.158208 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:38322.service: Deactivated successfully.
Mar 17 18:43:39.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.81:22-10.0.0.1:38322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:39.158961 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 18:43:39.159542 systemd-logind[1293]: Session 25 logged out. Waiting for processes to exit.
Mar 17 18:43:39.160399 systemd-logind[1293]: Removed session 25.
Mar 17 18:43:39.181000 audit[5718]: USER_ACCT pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.183769 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 38328 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:43:39.182000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.182000 audit[5718]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1d1825d0 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:39.182000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:43:39.184771 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:39.188681 systemd-logind[1293]: New session 26 of user core.
Mar 17 18:43:39.189474 systemd[1]: Started session-26.scope.
Mar 17 18:43:39.193000 audit[5718]: USER_START pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.194000 audit[5723]: CRED_ACQ pid=5723 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.326427 sshd[5718]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:39.325000 audit[5718]: USER_END pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.325000 audit[5718]: CRED_DISP pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:39.328795 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:38328.service: Deactivated successfully.
Mar 17 18:43:39.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.81:22-10.0.0.1:38328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:39.329718 systemd-logind[1293]: Session 26 logged out. Waiting for processes to exit.
Mar 17 18:43:39.329727 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 18:43:39.330436 systemd-logind[1293]: Removed session 26.
Mar 17 18:43:44.329413 systemd[1]: Started sshd@26-10.0.0.81:22-10.0.0.1:38340.service.
Mar 17 18:43:44.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.81:22-10.0.0.1:38340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:44.332883 kernel: kauditd_printk_skb: 57 callbacks suppressed
Mar 17 18:43:44.332947 kernel: audit: type=1130 audit(1742237024.328:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.81:22-10.0.0.1:38340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:44.357000 audit[5745]: USER_ACCT pid=5745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.358719 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 38340 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:43:44.361371 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:44.360000 audit[5745]: CRED_ACQ pid=5745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.364983 systemd-logind[1293]: New session 27 of user core.
Mar 17 18:43:44.365802 systemd[1]: Started session-27.scope.
Mar 17 18:43:44.365998 kernel: audit: type=1101 audit(1742237024.357:571): pid=5745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.366045 kernel: audit: type=1103 audit(1742237024.360:572): pid=5745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.366114 kernel: audit: type=1006 audit(1742237024.360:573): pid=5745 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Mar 17 18:43:44.360000 audit[5745]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe80c8fdb0 a2=3 a3=0 items=0 ppid=1 pid=5745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:44.372129 kernel: audit: type=1300 audit(1742237024.360:573): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe80c8fdb0 a2=3 a3=0 items=0 ppid=1 pid=5745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:44.372198 kernel: audit: type=1327 audit(1742237024.360:573): proctitle=737368643A20636F7265205B707269765D
Mar 17 18:43:44.360000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:43:44.373526 kernel: audit: type=1105 audit(1742237024.370:574): pid=5745 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.370000 audit[5745]: USER_START pid=5745 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.377658 kernel: audit: type=1103 audit(1742237024.371:575): pid=5748 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.371000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.470768 sshd[5745]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:44.471000 audit[5745]: USER_END pid=5745 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.473703 systemd[1]: sshd@26-10.0.0.81:22-10.0.0.1:38340.service: Deactivated successfully.
Mar 17 18:43:44.471000 audit[5745]: CRED_DISP pid=5745 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.474476 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:43:44.479507 systemd-logind[1293]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:43:44.480220 kernel: audit: type=1106 audit(1742237024.471:576): pid=5745 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.480360 kernel: audit: type=1104 audit(1742237024.471:577): pid=5745 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:44.480207 systemd-logind[1293]: Removed session 27.
Mar 17 18:43:44.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.81:22-10.0.0.1:38340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:44.510000 audit[5761]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:43:44.510000 audit[5761]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe625465b0 a2=0 a3=7ffe6254659c items=0 ppid=2420 pid=5761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:44.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:43:44.517000 audit[5761]: NETFILTER_CFG table=nat:125 family=2 entries=106 op=nft_register_chain pid=5761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:43:44.517000 audit[5761]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffe625465b0 a2=0 a3=7ffe6254659c items=0 ppid=2420 pid=5761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:44.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:43:47.450335 kubelet[2216]: E0317 18:43:47.450281 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:49.474218 systemd[1]: Started sshd@27-10.0.0.81:22-10.0.0.1:52260.service.
Mar 17 18:43:49.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.81:22-10.0.0.1:52260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:49.475457 kernel: kauditd_printk_skb: 7 callbacks suppressed
Mar 17 18:43:49.475523 kernel: audit: type=1130 audit(1742237029.473:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.81:22-10.0.0.1:52260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:49.505000 audit[5785]: USER_ACCT pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.506222 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 52260 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:43:49.510830 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:49.509000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.515520 systemd-logind[1293]: New session 28 of user core.
Mar 17 18:43:49.516200 kernel: audit: type=1101 audit(1742237029.505:582): pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.516234 kernel: audit: type=1103 audit(1742237029.509:583): pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.516470 kernel: audit: type=1006 audit(1742237029.509:584): pid=5785 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Mar 17 18:43:49.516657 systemd[1]: Started session-28.scope.
Mar 17 18:43:49.509000 audit[5785]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff71cbe2b0 a2=3 a3=0 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:49.523970 kernel: audit: type=1300 audit(1742237029.509:584): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff71cbe2b0 a2=3 a3=0 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:49.509000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:43:49.526056 kernel: audit: type=1327 audit(1742237029.509:584): proctitle=737368643A20636F7265205B707269765D
Mar 17 18:43:49.520000 audit[5785]: USER_START pid=5785 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.532146 kernel: audit: type=1105 audit(1742237029.520:585): pid=5785 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.532196 kernel: audit: type=1103 audit(1742237029.521:586): pid=5788 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.521000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.638658 sshd[5785]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:49.638000 audit[5785]: USER_END pid=5785 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.641331 systemd[1]: sshd@27-10.0.0.81:22-10.0.0.1:52260.service: Deactivated successfully.
Mar 17 18:43:49.642386 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 18:43:49.639000 audit[5785]: CRED_DISP pid=5785 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.646726 systemd-logind[1293]: Session 28 logged out. Waiting for processes to exit.
Mar 17 18:43:49.647565 systemd-logind[1293]: Removed session 28.
Mar 17 18:43:49.648104 kernel: audit: type=1106 audit(1742237029.638:587): pid=5785 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.648168 kernel: audit: type=1104 audit(1742237029.639:588): pid=5785 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:49.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.81:22-10.0.0.1:52260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:54.642053 systemd[1]: Started sshd@28-10.0.0.81:22-10.0.0.1:52272.service.
Mar 17 18:43:54.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.81:22-10.0.0.1:52272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:54.643243 kernel: kauditd_printk_skb: 1 callbacks suppressed
Mar 17 18:43:54.643309 kernel: audit: type=1130 audit(1742237034.641:590): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.81:22-10.0.0.1:52272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:54.671000 audit[5802]: USER_ACCT pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.673038 sshd[5802]: Accepted publickey for core from 10.0.0.1 port 52272 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:43:54.675000 audit[5802]: CRED_ACQ pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.676972 kernel: audit: type=1101 audit(1742237034.671:591): pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.677012 kernel: audit: type=1103 audit(1742237034.675:592): pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.677132 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:54.682928 kernel: audit: type=1006 audit(1742237034.675:593): pid=5802 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Mar 17 18:43:54.683061 kernel: audit: type=1300 audit(1742237034.675:593): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7bd4e760 a2=3 a3=0 items=0 ppid=1 pid=5802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:54.675000 audit[5802]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7bd4e760 a2=3 a3=0 items=0 ppid=1 pid=5802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:43:54.685550 systemd-logind[1293]: New session 29 of user core.
Mar 17 18:43:54.686256 systemd[1]: Started session-29.scope.
Mar 17 18:43:54.675000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:43:54.688306 kernel: audit: type=1327 audit(1742237034.675:593): proctitle=737368643A20636F7265205B707269765D
Mar 17 18:43:54.690000 audit[5802]: USER_START pid=5802 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.694000 audit[5806]: CRED_ACQ pid=5806 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.698942 kernel: audit: type=1105 audit(1742237034.690:594): pid=5802 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.698998 kernel: audit: type=1103 audit(1742237034.694:595): pid=5806 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.806330 sshd[5802]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:54.806000 audit[5802]: USER_END pid=5802 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.808347 systemd[1]: sshd@28-10.0.0.81:22-10.0.0.1:52272.service: Deactivated successfully.
Mar 17 18:43:54.809598 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 18:43:54.809611 systemd-logind[1293]: Session 29 logged out. Waiting for processes to exit.
Mar 17 18:43:54.810675 systemd-logind[1293]: Removed session 29.
Mar 17 18:43:54.806000 audit[5802]: CRED_DISP pid=5802 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.815979 kernel: audit: type=1106 audit(1742237034.806:596): pid=5802 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.816055 kernel: audit: type=1104 audit(1742237034.806:597): pid=5802 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:43:54.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.81:22-10.0.0.1:52272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:43:55.102896 env[1309]: time="2025-03-17T18:43:55.102837170Z" level=info msg="StopPodSandbox for \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\""
Mar 17 18:43:55.103385 env[1309]: time="2025-03-17T18:43:55.102957036Z" level=info msg="TearDown network for sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" successfully"
Mar 17 18:43:55.103385 env[1309]: time="2025-03-17T18:43:55.102994657Z" level=info msg="StopPodSandbox for \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" returns successfully"
Mar 17 18:43:55.105334 env[1309]: time="2025-03-17T18:43:55.105308821Z" level=info msg="RemovePodSandbox for \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\""
Mar 17 18:43:55.105404 env[1309]: time="2025-03-17T18:43:55.105336793Z" level=info msg="Forcibly stopping sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\""
Mar 17 18:43:55.105404 env[1309]: time="2025-03-17T18:43:55.105392118Z" level=info msg="TearDown network for sandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" successfully"
Mar 17 18:43:55.114622 env[1309]: time="2025-03-17T18:43:55.114562118Z" level=info msg="RemovePodSandbox \"546d66e230280017c9e83f127fea57d74b165ca11c271d28d9e919c656ba2223\" returns successfully"
Mar 17 18:43:55.115368 env[1309]: time="2025-03-17T18:43:55.115327013Z" level=info msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\""
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.171 [WARNING][5836] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0", GenerateName:"calico-apiserver-9bd99c456-", Namespace:"calico-apiserver", SelfLink:"", UID:"e63620b8-dbf8-4096-bfe9-5ecbfa441daf", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bd99c456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9", Pod:"calico-apiserver-9bd99c456-2chqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777a0388350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.171 [INFO][5836] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.171 [INFO][5836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" iface="eth0" netns=""
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.171 [INFO][5836] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.171 [INFO][5836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.198 [INFO][5844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0"
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.198 [INFO][5844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.198 [INFO][5844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.204 [WARNING][5844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0"
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.204 [INFO][5844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0"
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.205 [INFO][5844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 17 18:43:55.209428 env[1309]: 2025-03-17 18:43:55.207 [INFO][5836] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"
Mar 17 18:43:55.210261 env[1309]: time="2025-03-17T18:43:55.209466599Z" level=info msg="TearDown network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" successfully"
Mar 17 18:43:55.210261 env[1309]: time="2025-03-17T18:43:55.209502406Z" level=info msg="StopPodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" returns successfully"
Mar 17 18:43:55.210261 env[1309]: time="2025-03-17T18:43:55.210038489Z" level=info msg="RemovePodSandbox for \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\""
Mar 17 18:43:55.210261 env[1309]: time="2025-03-17T18:43:55.210087452Z" level=info msg="Forcibly stopping sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\""
Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.251 [WARNING][5867] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0", GenerateName:"calico-apiserver-9bd99c456-", Namespace:"calico-apiserver", SelfLink:"", UID:"e63620b8-dbf8-4096-bfe9-5ecbfa441daf", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bd99c456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aee38d0fdbbe963365defeea3ac1a6fbbaff7136cfc1793e76a9346a972118c9", Pod:"calico-apiserver-9bd99c456-2chqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali777a0388350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.251 [INFO][5867] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7"
Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.251 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" iface="eth0" netns="" Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.251 [INFO][5867] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.251 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.269 [INFO][5874] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.269 [INFO][5874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.269 [INFO][5874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.281 [WARNING][5874] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.281 [INFO][5874] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" HandleID="k8s-pod-network.2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Workload="localhost-k8s-calico--apiserver--9bd99c456--2chqm-eth0" Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.283 [INFO][5874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:55.291984 env[1309]: 2025-03-17 18:43:55.290 [INFO][5867] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7" Mar 17 18:43:55.292702 env[1309]: time="2025-03-17T18:43:55.292050587Z" level=info msg="TearDown network for sandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" successfully" Mar 17 18:43:55.298523 env[1309]: time="2025-03-17T18:43:55.298483513Z" level=info msg="RemovePodSandbox \"2bfd019e7b39e074ce9a392a0043cc12670235618247147100514b63850ca1e7\" returns successfully" Mar 17 18:43:55.299032 env[1309]: time="2025-03-17T18:43:55.298973108Z" level=info msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.334 [WARNING][5898] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0", GenerateName:"calico-kube-controllers-75cb744445-", Namespace:"calico-system", SelfLink:"", UID:"7b65dbfa-20f5-4d4f-8c03-972a550ae421", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75cb744445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701", Pod:"calico-kube-controllers-75cb744445-d9hht", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f312783f28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.335 [INFO][5898] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.335 [INFO][5898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" iface="eth0" netns="" Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.335 [INFO][5898] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.335 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.357 [INFO][5905] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.357 [INFO][5905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.357 [INFO][5905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.362 [WARNING][5905] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.362 [INFO][5905] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.364 [INFO][5905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:55.367804 env[1309]: 2025-03-17 18:43:55.365 [INFO][5898] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:55.368599 env[1309]: time="2025-03-17T18:43:55.368544054Z" level=info msg="TearDown network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" successfully" Mar 17 18:43:55.368599 env[1309]: time="2025-03-17T18:43:55.368588388Z" level=info msg="StopPodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" returns successfully" Mar 17 18:43:55.369346 env[1309]: time="2025-03-17T18:43:55.369317656Z" level=info msg="RemovePodSandbox for \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" Mar 17 18:43:55.369507 env[1309]: time="2025-03-17T18:43:55.369428656Z" level=info msg="Forcibly stopping sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\"" Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.405 [WARNING][5928] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0", GenerateName:"calico-kube-controllers-75cb744445-", Namespace:"calico-system", SelfLink:"", UID:"7b65dbfa-20f5-4d4f-8c03-972a550ae421", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75cb744445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8c62f297a29d2f13edc0dadd57bd1f527c8c2a5c208e13ec1091f9891176701", Pod:"calico-kube-controllers-75cb744445-d9hht", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f312783f28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.405 [INFO][5928] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.405 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" iface="eth0" netns="" Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.405 [INFO][5928] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.405 [INFO][5928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.426 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.426 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.426 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.431 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.431 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" HandleID="k8s-pod-network.f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Workload="localhost-k8s-calico--kube--controllers--75cb744445--d9hht-eth0" Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.432 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:55.435270 env[1309]: 2025-03-17 18:43:55.433 [INFO][5928] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9" Mar 17 18:43:55.435984 env[1309]: time="2025-03-17T18:43:55.435301032Z" level=info msg="TearDown network for sandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" successfully" Mar 17 18:43:55.438674 env[1309]: time="2025-03-17T18:43:55.438643468Z" level=info msg="RemovePodSandbox \"f3fd06757146f37a016a9814999ea28091cca1781075cc29259031b7d3ab8de9\" returns successfully" Mar 17 18:43:55.439203 env[1309]: time="2025-03-17T18:43:55.439156288Z" level=info msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\"" Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.473 [WARNING][5958] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rwbh5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06aa47e7-14c4-4c99-9d64-88ed0bae7c98", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f", Pod:"csi-node-driver-rwbh5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif83b62df300", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.473 [INFO][5958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.474 [INFO][5958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" iface="eth0" netns="" Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.474 [INFO][5958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.474 [INFO][5958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.500 [INFO][5966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.500 [INFO][5966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.500 [INFO][5966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.506 [WARNING][5966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.506 [INFO][5966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.507 [INFO][5966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:43:55.511440 env[1309]: 2025-03-17 18:43:55.509 [INFO][5958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:55.512090 env[1309]: time="2025-03-17T18:43:55.511436553Z" level=info msg="TearDown network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" successfully" Mar 17 18:43:55.512090 env[1309]: time="2025-03-17T18:43:55.511467922Z" level=info msg="StopPodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" returns successfully" Mar 17 18:43:55.512090 env[1309]: time="2025-03-17T18:43:55.511816131Z" level=info msg="RemovePodSandbox for \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\"" Mar 17 18:43:55.512090 env[1309]: time="2025-03-17T18:43:55.511839755Z" level=info msg="Forcibly stopping sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\"" Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.559 [WARNING][5990] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rwbh5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06aa47e7-14c4-4c99-9d64-88ed0bae7c98", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44575608180746b3fb7c70f760365993b7a02fc995b9d444470651f0f3d78d4f", Pod:"csi-node-driver-rwbh5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif83b62df300", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.559 [INFO][5990] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.560 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" iface="eth0" netns="" Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.560 [INFO][5990] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.560 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.583 [INFO][5998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.583 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.583 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.589 [WARNING][5998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.589 [INFO][5998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" HandleID="k8s-pod-network.a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Workload="localhost-k8s-csi--node--driver--rwbh5-eth0" Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.590 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:43:55.593980 env[1309]: 2025-03-17 18:43:55.592 [INFO][5990] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb" Mar 17 18:43:55.593980 env[1309]: time="2025-03-17T18:43:55.593921703Z" level=info msg="TearDown network for sandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" successfully" Mar 17 18:43:55.597201 env[1309]: time="2025-03-17T18:43:55.597160816Z" level=info msg="RemovePodSandbox \"a7e102b329c12c40cb905dcd7a54b9bb1f82365e8bc4ebcf305dc3a6ef5da0bb\" returns successfully" Mar 17 18:43:55.597711 env[1309]: time="2025-03-17T18:43:55.597673815Z" level=info msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\"" Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.630 [WARNING][6021] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0", GenerateName:"calico-apiserver-9bd99c456-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe03e3e2-d761-40da-81d3-dd75baa1eeea", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bd99c456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d", Pod:"calico-apiserver-9bd99c456-c2kg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali012642b78b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.630 [INFO][6021] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.630 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" iface="eth0" netns="" Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.630 [INFO][6021] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.630 [INFO][6021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.656 [INFO][6029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.656 [INFO][6029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.656 [INFO][6029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.662 [WARNING][6029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.662 [INFO][6029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.664 [INFO][6029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:55.668326 env[1309]: 2025-03-17 18:43:55.666 [INFO][6021] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:55.668326 env[1309]: time="2025-03-17T18:43:55.668287802Z" level=info msg="TearDown network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" successfully" Mar 17 18:43:55.669053 env[1309]: time="2025-03-17T18:43:55.668349959Z" level=info msg="StopPodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" returns successfully" Mar 17 18:43:55.672432 env[1309]: time="2025-03-17T18:43:55.670041145Z" level=info msg="RemovePodSandbox for \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\"" Mar 17 18:43:55.672432 env[1309]: time="2025-03-17T18:43:55.670087021Z" level=info msg="Forcibly stopping sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\"" Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.714 [WARNING][6052] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0", GenerateName:"calico-apiserver-9bd99c456-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe03e3e2-d761-40da-81d3-dd75baa1eeea", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bd99c456", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21badf43d7dfa03f1316746d88e3371c09dfb0e61b18bfa7e1e45b85197b8e0d", Pod:"calico-apiserver-9bd99c456-c2kg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali012642b78b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.714 [INFO][6052] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.714 [INFO][6052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" iface="eth0" netns="" Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.714 [INFO][6052] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.714 [INFO][6052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.737 [INFO][6059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.737 [INFO][6059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.737 [INFO][6059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.742 [WARNING][6059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.742 [INFO][6059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" HandleID="k8s-pod-network.92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Workload="localhost-k8s-calico--apiserver--9bd99c456--c2kg7-eth0" Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.743 [INFO][6059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:55.745724 env[1309]: 2025-03-17 18:43:55.744 [INFO][6052] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3" Mar 17 18:43:55.746358 env[1309]: time="2025-03-17T18:43:55.745750852Z" level=info msg="TearDown network for sandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" successfully" Mar 17 18:43:55.748959 env[1309]: time="2025-03-17T18:43:55.748923928Z" level=info msg="RemovePodSandbox \"92d44d78b54b50f174076126f77f425dd676d9425a278f0e240d4456f1b32aa3\" returns successfully" Mar 17 18:43:55.749498 env[1309]: time="2025-03-17T18:43:55.749467636Z" level=info msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\"" Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.784 [WARNING][6082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"182af1a9-314d-44fe-9729-65e5c19acc5a", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc", Pod:"coredns-7db6d8ff4d-qvfwr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10dbb1f0f29", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.785 [INFO][6082] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.785 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" iface="eth0" netns="" Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.785 [INFO][6082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.785 [INFO][6082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.802 [INFO][6089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.802 [INFO][6089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.802 [INFO][6089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.807 [WARNING][6089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.807 [INFO][6089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.808 [INFO][6089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:55.811983 env[1309]: 2025-03-17 18:43:55.810 [INFO][6082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:55.812442 env[1309]: time="2025-03-17T18:43:55.811997655Z" level=info msg="TearDown network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" successfully" Mar 17 18:43:55.812442 env[1309]: time="2025-03-17T18:43:55.812025919Z" level=info msg="StopPodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" returns successfully" Mar 17 18:43:55.812596 env[1309]: time="2025-03-17T18:43:55.812541653Z" level=info msg="RemovePodSandbox for \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\"" Mar 17 18:43:55.812641 env[1309]: time="2025-03-17T18:43:55.812585326Z" level=info msg="Forcibly stopping sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\"" Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.846 [WARNING][6111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"182af1a9-314d-44fe-9729-65e5c19acc5a", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51e3ac3348e8c59f85a8113331c9d188eac06bdad9caeaf7c62de6c82c4ca7dc", Pod:"coredns-7db6d8ff4d-qvfwr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10dbb1f0f29", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.846 [INFO][6111] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.846 [INFO][6111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" iface="eth0" netns="" Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.847 [INFO][6111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.847 [INFO][6111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.866 [INFO][6119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.866 [INFO][6119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.866 [INFO][6119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.871 [WARNING][6119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.871 [INFO][6119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" HandleID="k8s-pod-network.f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Workload="localhost-k8s-coredns--7db6d8ff4d--qvfwr-eth0" Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.872 [INFO][6119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:55.876416 env[1309]: 2025-03-17 18:43:55.874 [INFO][6111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af" Mar 17 18:43:55.877124 env[1309]: time="2025-03-17T18:43:55.876447212Z" level=info msg="TearDown network for sandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" successfully" Mar 17 18:43:55.880558 env[1309]: time="2025-03-17T18:43:55.880512565Z" level=info msg="RemovePodSandbox \"f968fa7b907271e6c9835c0290493f3dfe957fcc8047353a24e02cf2b40ab1af\" returns successfully" Mar 17 18:43:55.881229 env[1309]: time="2025-03-17T18:43:55.881143738Z" level=info msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.918 [WARNING][6141] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109", Pod:"coredns-7db6d8ff4d-k6vcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib798aa684fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.918 [INFO][6141] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.918 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" iface="eth0" netns="" Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.918 [INFO][6141] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.918 [INFO][6141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.953 [INFO][6148] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.953 [INFO][6148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.953 [INFO][6148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.958 [WARNING][6148] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.958 [INFO][6148] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.960 [INFO][6148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:55.964959 env[1309]: 2025-03-17 18:43:55.961 [INFO][6141] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:55.965430 env[1309]: time="2025-03-17T18:43:55.964986875Z" level=info msg="TearDown network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" successfully" Mar 17 18:43:55.965430 env[1309]: time="2025-03-17T18:43:55.965017713Z" level=info msg="StopPodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" returns successfully" Mar 17 18:43:55.965521 env[1309]: time="2025-03-17T18:43:55.965489505Z" level=info msg="RemovePodSandbox for \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" Mar 17 18:43:55.965614 env[1309]: time="2025-03-17T18:43:55.965571700Z" level=info msg="Forcibly stopping sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\"" Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:55.999 [WARNING][6170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5a7c78d5-a1ee-4010-a9a6-a696fb4c1f04", ResourceVersion:"1228", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 42, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2b8eaf4ef64969cc2de05dc0f54f766d8b240d265a02f7eacf4c1d1f2b60d109", Pod:"coredns-7db6d8ff4d-k6vcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib798aa684fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:55.999 [INFO][6170] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:55.999 [INFO][6170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" iface="eth0" netns="" Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:55.999 [INFO][6170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:55.999 [INFO][6170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:56.018 [INFO][6178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:56.018 [INFO][6178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:56.018 [INFO][6178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:56.022 [WARNING][6178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:56.022 [INFO][6178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" HandleID="k8s-pod-network.2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Workload="localhost-k8s-coredns--7db6d8ff4d--k6vcp-eth0" Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:56.023 [INFO][6178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:43:56.026792 env[1309]: 2025-03-17 18:43:56.025 [INFO][6170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035" Mar 17 18:43:56.027248 env[1309]: time="2025-03-17T18:43:56.026821053Z" level=info msg="TearDown network for sandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" successfully" Mar 17 18:43:56.030504 env[1309]: time="2025-03-17T18:43:56.030462944Z" level=info msg="RemovePodSandbox \"2813c6017befd4232e47bb290a3293071ef38b2077a6eac5f099ff751103e035\" returns successfully" Mar 17 18:43:59.809288 systemd[1]: Started sshd@29-10.0.0.81:22-10.0.0.1:45442.service. Mar 17 18:43:59.810897 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:43:59.810974 kernel: audit: type=1130 audit(1742237039.808:599): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.81:22-10.0.0.1:45442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:43:59.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.81:22-10.0.0.1:45442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:43:59.840000 audit[6186]: USER_ACCT pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.841341 sshd[6186]: Accepted publickey for core from 10.0.0.1 port 45442 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:43:59.844000 audit[6186]: CRED_ACQ pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.845667 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:59.848831 kernel: audit: type=1101 audit(1742237039.840:600): pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.848894 kernel: audit: type=1103 audit(1742237039.844:601): pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.848912 kernel: audit: type=1006 audit(1742237039.844:602): pid=6186 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Mar 17 18:43:59.850065 systemd-logind[1293]: New session 30 of user core. 
Mar 17 18:43:59.844000 audit[6186]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0126cea0 a2=3 a3=0 items=0 ppid=1 pid=6186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:59.851184 systemd[1]: Started session-30.scope. Mar 17 18:43:59.855027 kernel: audit: type=1300 audit(1742237039.844:602): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0126cea0 a2=3 a3=0 items=0 ppid=1 pid=6186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:43:59.855103 kernel: audit: type=1327 audit(1742237039.844:602): proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:59.844000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:43:59.855000 audit[6186]: USER_START pid=6186 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.861247 kernel: audit: type=1105 audit(1742237039.855:603): pid=6186 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.865096 kernel: audit: type=1103 audit(1742237039.857:604): pid=6189 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.857000 audit[6189]: CRED_ACQ pid=6189 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.987322 sshd[6186]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:59.987000 audit[6186]: USER_END pid=6186 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.990458 systemd[1]: sshd@29-10.0.0.81:22-10.0.0.1:45442.service: Deactivated successfully. Mar 17 18:43:59.991673 systemd[1]: session-30.scope: Deactivated successfully. Mar 17 18:43:59.991694 systemd-logind[1293]: Session 30 logged out. Waiting for processes to exit. Mar 17 18:43:59.996542 kernel: audit: type=1106 audit(1742237039.987:605): pid=6186 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.996652 kernel: audit: type=1104 audit(1742237039.987:606): pid=6186 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.987000 audit[6186]: CRED_DISP pid=6186 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:43:59.993112 systemd-logind[1293]: Removed session 30. 
Mar 17 18:43:59.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.81:22-10.0.0.1:45442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'